Test Report: Docker_Linux_crio_arm64 22402

783b0304fb34eb1d9554b20c324bb66df0781ba8:2026-01-11:43196

Failed tests (27/332)

TestAddons/serial/Volcano (0.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable volcano --alsologtostderr -v=1: exit status 11 (327.232541ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 08:17:03.401423  583638 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:17:03.402327  583638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:17:03.402379  583638 out.go:374] Setting ErrFile to fd 2...
	I0111 08:17:03.402402  583638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:17:03.402723  583638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:17:03.403087  583638 mustload.go:66] Loading cluster: addons-328805
	I0111 08:17:03.403531  583638 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:17:03.403580  583638 addons.go:622] checking whether the cluster is paused
	I0111 08:17:03.403736  583638 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:17:03.403778  583638 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:17:03.404419  583638 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:17:03.431753  583638 ssh_runner.go:195] Run: systemctl --version
	I0111 08:17:03.431826  583638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:17:03.450817  583638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:17:03.562386  583638 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:17:03.562501  583638 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:17:03.609891  583638 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:17:03.609971  583638 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:17:03.609991  583638 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:17:03.610011  583638 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:17:03.610044  583638 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:17:03.610072  583638 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:17:03.610092  583638 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:17:03.610112  583638 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:17:03.610176  583638 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:17:03.610209  583638 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:17:03.610244  583638 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:17:03.610265  583638 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:17:03.610312  583638 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:17:03.610339  583638 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:17:03.610364  583638 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:17:03.610412  583638 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:17:03.610440  583638 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:17:03.610463  583638 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:17:03.610485  583638 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:17:03.610522  583638 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:17:03.610554  583638 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:17:03.610573  583638 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:17:03.610592  583638 cri.go:96] found id: ""
	I0111 08:17:03.610683  583638 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:17:03.628416  583638 out.go:203] 
	W0111 08:17:03.631387  583638 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:17:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:17:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:17:03.631438  583638 out.go:285] * 
	* 
	W0111 08:17:03.637405  583638 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:17:03.640380  583638 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.33s)
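Note: every addon-disable failure in this report exits with the same MK_ADDON_DISABLE_PAUSED error. The pause check first lists kube-system containers through crictl (which succeeds, as the "found id" lines above show) and then runs `sudo runc list -f json` on the node, which fails with "open /run/runc: no such file or directory"; the likely cause is that this cri-o node keeps no runc state under /run/runc, though that is an inference, not something the log itself confirms. A minimal sketch for re-running both checks by hand against the addons-328805 profile, assuming the cluster is still up:

	# crictl query used by the pause check (succeeds in the log above)
	out/minikube-linux-arm64 -p addons-328805 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# runc query that triggers MK_ADDON_DISABLE_PAUSED (fails in the log above)
	out/minikube-linux-arm64 -p addons-328805 ssh -- sudo runc list -f json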

TestAddons/parallel/Registry (56.46s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 9.855024ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-8s2dv" [1609f04e-6ee9-47e4-b676-e38186ae2b70] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003731718s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-dksf6" [769e8897-81a7-4ed4-9c67-40a68686c465] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004486334s
addons_test.go:394: (dbg) Run:  kubectl --context addons-328805 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-328805 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-328805 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (45.911048929s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 ip
2026/01/11 08:18:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable registry --alsologtostderr -v=1: exit status 11 (264.629055ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 08:18:10.146654  585018 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:18:10.147419  585018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:10.147462  585018 out.go:374] Setting ErrFile to fd 2...
	I0111 08:18:10.147503  585018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:10.147943  585018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:18:10.148361  585018 mustload.go:66] Loading cluster: addons-328805
	I0111 08:18:10.148799  585018 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:10.148844  585018 addons.go:622] checking whether the cluster is paused
	I0111 08:18:10.148998  585018 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:10.149038  585018 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:18:10.149592  585018 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:18:10.170213  585018 ssh_runner.go:195] Run: systemctl --version
	I0111 08:18:10.170281  585018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:18:10.188008  585018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:18:10.292747  585018 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:18:10.292834  585018 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:18:10.322719  585018 cri.go:96] found id: "32ae440548e76057587bf0d296846dddcdab9df1ad2abd21f57699d174d18a11"
	I0111 08:18:10.322740  585018 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:18:10.322746  585018 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:18:10.322749  585018 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:18:10.322753  585018 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:18:10.322756  585018 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:18:10.322760  585018 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:18:10.322763  585018 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:18:10.322766  585018 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:18:10.322771  585018 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:18:10.322774  585018 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:18:10.322777  585018 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:18:10.322780  585018 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:18:10.322784  585018 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:18:10.322787  585018 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:18:10.322796  585018 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:18:10.322800  585018 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:18:10.322805  585018 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:18:10.322813  585018 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:18:10.322817  585018 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:18:10.322821  585018 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:18:10.322824  585018 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:18:10.322828  585018 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:18:10.322831  585018 cri.go:96] found id: ""
	I0111 08:18:10.322880  585018 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:18:10.338092  585018 out.go:203] 
	W0111 08:18:10.341245  585018 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:18:10.341270  585018 out.go:285] * 
	* 
	W0111 08:18:10.345376  585018 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:18:10.348409  585018 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (56.46s)

TestAddons/parallel/RegistryCreds (0.47s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.305842ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-328805
addons_test.go:334: (dbg) Run:  kubectl --context addons-328805 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (251.554421ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 08:18:43.012914  586530 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:18:43.013740  586530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:43.013782  586530 out.go:374] Setting ErrFile to fd 2...
	I0111 08:18:43.013811  586530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:43.014297  586530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:18:43.015105  586530 mustload.go:66] Loading cluster: addons-328805
	I0111 08:18:43.015541  586530 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:43.015568  586530 addons.go:622] checking whether the cluster is paused
	I0111 08:18:43.015692  586530 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:43.015715  586530 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:18:43.016230  586530 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:18:43.034880  586530 ssh_runner.go:195] Run: systemctl --version
	I0111 08:18:43.034955  586530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:18:43.052690  586530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:18:43.157546  586530 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:18:43.157653  586530 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:18:43.187775  586530 cri.go:96] found id: "32ae440548e76057587bf0d296846dddcdab9df1ad2abd21f57699d174d18a11"
	I0111 08:18:43.187848  586530 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:18:43.187867  586530 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:18:43.187888  586530 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:18:43.187908  586530 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:18:43.187940  586530 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:18:43.187958  586530 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:18:43.187977  586530 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:18:43.188005  586530 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:18:43.188038  586530 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:18:43.188067  586530 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:18:43.188094  586530 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:18:43.188112  586530 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:18:43.188131  586530 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:18:43.188149  586530 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:18:43.188194  586530 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:18:43.188213  586530 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:18:43.188233  586530 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:18:43.188251  586530 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:18:43.188285  586530 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:18:43.188309  586530 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:18:43.188326  586530 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:18:43.188343  586530 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:18:43.188371  586530 cri.go:96] found id: ""
	I0111 08:18:43.188457  586530 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:18:43.203314  586530 out.go:203] 
	W0111 08:18:43.206011  586530 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:18:43.206041  586530 out.go:285] * 
	* 
	W0111 08:18:43.210341  586530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:18:43.213031  586530 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.47s)

TestAddons/parallel/Ingress (11.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-328805 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-328805 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-328805 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [f64ef5ca-fdf4-46f0-8171-8cf1c4488475] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [f64ef5ca-fdf4-46f0-8171-8cf1c4488475] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004103977s
I0111 08:18:34.812386  576907 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-328805 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (304.914097ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 08:18:35.918530  586230 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:18:35.919312  586230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:35.919327  586230 out.go:374] Setting ErrFile to fd 2...
	I0111 08:18:35.919333  586230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:35.919633  586230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:18:35.919959  586230 mustload.go:66] Loading cluster: addons-328805
	I0111 08:18:35.920380  586230 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:35.920421  586230 addons.go:622] checking whether the cluster is paused
	I0111 08:18:35.920565  586230 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:35.920600  586230 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:18:35.921148  586230 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:18:35.939000  586230 ssh_runner.go:195] Run: systemctl --version
	I0111 08:18:35.939062  586230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:18:35.961814  586230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:18:36.104659  586230 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:18:36.104753  586230 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:18:36.139121  586230 cri.go:96] found id: "32ae440548e76057587bf0d296846dddcdab9df1ad2abd21f57699d174d18a11"
	I0111 08:18:36.139147  586230 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:18:36.139152  586230 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:18:36.139156  586230 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:18:36.139159  586230 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:18:36.139163  586230 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:18:36.139166  586230 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:18:36.139169  586230 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:18:36.139172  586230 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:18:36.139183  586230 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:18:36.139186  586230 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:18:36.139189  586230 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:18:36.139192  586230 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:18:36.139195  586230 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:18:36.139198  586230 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:18:36.139204  586230 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:18:36.139207  586230 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:18:36.139211  586230 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:18:36.139214  586230 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:18:36.139217  586230 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:18:36.139221  586230 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:18:36.139224  586230 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:18:36.139227  586230 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:18:36.139230  586230 cri.go:96] found id: ""
	I0111 08:18:36.139288  586230 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:18:36.157439  586230 out.go:203] 
	W0111 08:18:36.160437  586230 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:18:36.160465  586230 out.go:285] * 
	* 
	W0111 08:18:36.164616  586230 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:18:36.167793  586230 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable ingress --alsologtostderr -v=1: exit status 11 (264.425499ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 08:18:36.236137  586304 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:18:36.236890  586304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:36.236904  586304 out.go:374] Setting ErrFile to fd 2...
	I0111 08:18:36.236932  586304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:36.237320  586304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:18:36.237701  586304 mustload.go:66] Loading cluster: addons-328805
	I0111 08:18:36.238106  586304 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:36.238165  586304 addons.go:622] checking whether the cluster is paused
	I0111 08:18:36.238289  586304 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:36.238312  586304 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:18:36.238827  586304 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:18:36.256545  586304 ssh_runner.go:195] Run: systemctl --version
	I0111 08:18:36.256693  586304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:18:36.274601  586304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:18:36.376586  586304 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:18:36.376674  586304 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:18:36.406782  586304 cri.go:96] found id: "32ae440548e76057587bf0d296846dddcdab9df1ad2abd21f57699d174d18a11"
	I0111 08:18:36.406802  586304 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:18:36.406807  586304 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:18:36.406811  586304 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:18:36.406815  586304 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:18:36.406819  586304 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:18:36.406822  586304 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:18:36.406825  586304 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:18:36.406836  586304 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:18:36.406845  586304 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:18:36.406848  586304 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:18:36.406852  586304 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:18:36.406854  586304 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:18:36.406857  586304 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:18:36.406861  586304 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:18:36.406866  586304 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:18:36.406869  586304 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:18:36.406872  586304 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:18:36.406876  586304 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:18:36.406879  586304 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:18:36.406883  586304 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:18:36.406886  586304 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:18:36.406889  586304 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:18:36.406892  586304 cri.go:96] found id: ""
	I0111 08:18:36.406943  586304 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:18:36.421884  586304 out.go:203] 
	W0111 08:18:36.424835  586304 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:18:36.424866  586304 out.go:285] * 
	* 
	W0111 08:18:36.428949  586304 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:18:36.431947  586304 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (11.26s)

TestAddons/parallel/InspektorGadget (6.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-858g6" [54d9dd3b-6387-44b7-9bd5-140cb1c8e01c] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004756907s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (300.258637ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 08:18:42.517287  586477 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:18:42.518211  586477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:42.518235  586477 out.go:374] Setting ErrFile to fd 2...
	I0111 08:18:42.518242  586477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:42.520411  586477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:18:42.520872  586477 mustload.go:66] Loading cluster: addons-328805
	I0111 08:18:42.521590  586477 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:42.521620  586477 addons.go:622] checking whether the cluster is paused
	I0111 08:18:42.521803  586477 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:42.521834  586477 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:18:42.525889  586477 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:18:42.546014  586477 ssh_runner.go:195] Run: systemctl --version
	I0111 08:18:42.546072  586477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:18:42.564094  586477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:18:42.674010  586477 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:18:42.674101  586477 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:18:42.710433  586477 cri.go:96] found id: "32ae440548e76057587bf0d296846dddcdab9df1ad2abd21f57699d174d18a11"
	I0111 08:18:42.710458  586477 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:18:42.710463  586477 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:18:42.710467  586477 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:18:42.710470  586477 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:18:42.710474  586477 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:18:42.710478  586477 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:18:42.710481  586477 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:18:42.710484  586477 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:18:42.710495  586477 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:18:42.710498  586477 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:18:42.710502  586477 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:18:42.710505  586477 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:18:42.710509  586477 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:18:42.710512  586477 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:18:42.710518  586477 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:18:42.710521  586477 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:18:42.710526  586477 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:18:42.710530  586477 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:18:42.710533  586477 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:18:42.710538  586477 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:18:42.710546  586477 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:18:42.710549  586477 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:18:42.710552  586477 cri.go:96] found id: ""
	I0111 08:18:42.710611  586477 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:18:42.725650  586477 out.go:203] 
	W0111 08:18:42.728511  586477 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:18:42.728541  586477 out.go:285] * 
	* 
	W0111 08:18:42.732768  586477 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:18:42.739215  586477 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.31s)

TestAddons/parallel/MetricsServer (6.38s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 6.181074ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-gbb2s" [e44de34d-0dac-4e63-973d-54b6b57440ab] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004166443s
addons_test.go:465: (dbg) Run:  kubectl --context addons-328805 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (283.929612ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 08:18:24.943657  585584 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:18:24.944425  585584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:24.944439  585584 out.go:374] Setting ErrFile to fd 2...
	I0111 08:18:24.944446  585584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:24.944739  585584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:18:24.945065  585584 mustload.go:66] Loading cluster: addons-328805
	I0111 08:18:24.946443  585584 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:24.946475  585584 addons.go:622] checking whether the cluster is paused
	I0111 08:18:24.946604  585584 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:24.946627  585584 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:18:24.947159  585584 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:18:24.964887  585584 ssh_runner.go:195] Run: systemctl --version
	I0111 08:18:24.964946  585584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:18:24.989725  585584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:18:25.101358  585584 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:18:25.101448  585584 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:18:25.140736  585584 cri.go:96] found id: "32ae440548e76057587bf0d296846dddcdab9df1ad2abd21f57699d174d18a11"
	I0111 08:18:25.140779  585584 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:18:25.140788  585584 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:18:25.140792  585584 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:18:25.140796  585584 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:18:25.140800  585584 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:18:25.140804  585584 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:18:25.140810  585584 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:18:25.140813  585584 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:18:25.140820  585584 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:18:25.140824  585584 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:18:25.140828  585584 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:18:25.140831  585584 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:18:25.140837  585584 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:18:25.140847  585584 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:18:25.140853  585584 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:18:25.140856  585584 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:18:25.140862  585584 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:18:25.140875  585584 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:18:25.140879  585584 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:18:25.140884  585584 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:18:25.140887  585584 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:18:25.140898  585584 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:18:25.140901  585584 cri.go:96] found id: ""
	I0111 08:18:25.140972  585584 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:18:25.159315  585584 out.go:203] 
	W0111 08:18:25.162316  585584 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:18:25.162346  585584 out.go:285] * 
	* 
	W0111 08:18:25.166783  585584 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:18:25.169667  585584 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.38s)
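The metrics-server disable fails on the same `runc list` probe; nothing here is specific to the addon. One plausible cause, offered as an assumption rather than a finding from this log, is that crio on this node drives a low-level runtime other than runc (for example crun), so no state is ever written under /run/runc. That can be checked on the node:

	# which OCI runtime state roots actually exist
	ls -d /run/runc /run/crun 2>/dev/null
	# runtime status as reported over the CRI socket
	sudo crictl info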

x
+
TestAddons/parallel/CSI (41.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0111 08:18:19.779310  576907 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0111 08:18:19.784495  576907 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0111 08:18:19.784531  576907 kapi.go:107] duration metric: took 5.235848ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 5.2475ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-328805 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-328805 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [ff3abfe2-a0ae-4da2-9e80-a6ab769e7673] Pending
helpers_test.go:353: "task-pv-pod" [ff3abfe2-a0ae-4da2-9e80-a6ab769e7673] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004359345s
addons_test.go:574: (dbg) Run:  kubectl --context addons-328805 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-328805 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-328805 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-328805 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-328805 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-328805 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-328805 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [8c507851-0f28-46fc-b351-2c25db2afe32] Pending
helpers_test.go:353: "task-pv-pod-restore" [8c507851-0f28-46fc-b351-2c25db2afe32] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003199785s
addons_test.go:616: (dbg) Run:  kubectl --context addons-328805 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-328805 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-328805 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (262.678055ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 08:19:00.899401  586853 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:19:00.900449  586853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:19:00.900502  586853 out.go:374] Setting ErrFile to fd 2...
	I0111 08:19:00.900524  586853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:19:00.900856  586853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:19:00.901255  586853 mustload.go:66] Loading cluster: addons-328805
	I0111 08:19:00.901724  586853 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:19:00.901773  586853 addons.go:622] checking whether the cluster is paused
	I0111 08:19:00.901926  586853 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:19:00.901964  586853 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:19:00.902611  586853 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:19:00.922540  586853 ssh_runner.go:195] Run: systemctl --version
	I0111 08:19:00.922596  586853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:19:00.940696  586853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:19:01.048763  586853 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:19:01.048908  586853 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:19:01.078384  586853 cri.go:96] found id: "32ae440548e76057587bf0d296846dddcdab9df1ad2abd21f57699d174d18a11"
	I0111 08:19:01.078404  586853 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:19:01.078409  586853 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:19:01.078413  586853 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:19:01.078445  586853 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:19:01.078456  586853 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:19:01.078460  586853 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:19:01.078464  586853 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:19:01.078473  586853 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:19:01.078494  586853 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:19:01.078503  586853 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:19:01.078507  586853 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:19:01.078520  586853 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:19:01.078530  586853 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:19:01.078533  586853 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:19:01.078539  586853 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:19:01.078542  586853 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:19:01.078546  586853 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:19:01.078549  586853 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:19:01.078552  586853 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:19:01.078558  586853 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:19:01.078567  586853 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:19:01.078570  586853 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:19:01.078573  586853 cri.go:96] found id: ""
	I0111 08:19:01.078638  586853 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:19:01.093513  586853 out.go:203] 
	W0111 08:19:01.096466  586853 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:19:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:19:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:19:01.096491  586853 out.go:285] * 
	* 
	W0111 08:19:01.100678  586853 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:19:01.103730  586853 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (268.044194ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 08:19:01.167564  586894 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:19:01.168407  586894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:19:01.168422  586894 out.go:374] Setting ErrFile to fd 2...
	I0111 08:19:01.168430  586894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:19:01.168736  586894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:19:01.169070  586894 mustload.go:66] Loading cluster: addons-328805
	I0111 08:19:01.169509  586894 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:19:01.169550  586894 addons.go:622] checking whether the cluster is paused
	I0111 08:19:01.169705  586894 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:19:01.169724  586894 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:19:01.170338  586894 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:19:01.189371  586894 ssh_runner.go:195] Run: systemctl --version
	I0111 08:19:01.189507  586894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:19:01.208236  586894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:19:01.314867  586894 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:19:01.314951  586894 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:19:01.347156  586894 cri.go:96] found id: "32ae440548e76057587bf0d296846dddcdab9df1ad2abd21f57699d174d18a11"
	I0111 08:19:01.347184  586894 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:19:01.347203  586894 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:19:01.347208  586894 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:19:01.347229  586894 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:19:01.347235  586894 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:19:01.347238  586894 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:19:01.347242  586894 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:19:01.347245  586894 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:19:01.347266  586894 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:19:01.347277  586894 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:19:01.347281  586894 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:19:01.347284  586894 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:19:01.347287  586894 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:19:01.347290  586894 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:19:01.347316  586894 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:19:01.347320  586894 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:19:01.347333  586894 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:19:01.347339  586894 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:19:01.347343  586894 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:19:01.347349  586894 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:19:01.347355  586894 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:19:01.347358  586894 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:19:01.347361  586894 cri.go:96] found id: ""
	I0111 08:19:01.347428  586894 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:19:01.363378  586894 out.go:203] 
	W0111 08:19:01.366313  586894 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:19:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:19:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:19:01.366340  586894 out.go:285] * 
	* 
	W0111 08:19:01.370537  586894 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:19:01.374338  586894 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (41.60s)
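Worth noting: the CSI data path itself passed - the PVC bound, task-pv-pod and task-pv-pod-restore ran, and the snapshot/restore and the test's own kubectl deletes all completed. Only the two addon-disable calls at the end hit the paused-check error. A quick way to confirm the cleanup went through, reusing the kubectl form the test itself uses (assumes the addons-328805 profile is still running):

	kubectl --context addons-328805 get pvc,pod -n default
	kubectl --context addons-328805 get volumesnapshot -n default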

x
+
TestAddons/parallel/Headlamp (3.09s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-328805 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-328805 --alsologtostderr -v=1: exit status 11 (267.990338ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 08:17:13.953117  583853 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:17:13.953856  583853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:17:13.953889  583853 out.go:374] Setting ErrFile to fd 2...
	I0111 08:17:13.953928  583853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:17:13.954711  583853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:17:13.955647  583853 mustload.go:66] Loading cluster: addons-328805
	I0111 08:17:13.956552  583853 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:17:13.956613  583853 addons.go:622] checking whether the cluster is paused
	I0111 08:17:13.956792  583853 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:17:13.956836  583853 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:17:13.957498  583853 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:17:13.974883  583853 ssh_runner.go:195] Run: systemctl --version
	I0111 08:17:13.974942  583853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:17:13.992870  583853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:17:14.096688  583853 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:17:14.096775  583853 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:17:14.127636  583853 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:17:14.127659  583853 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:17:14.127664  583853 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:17:14.127669  583853 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:17:14.127672  583853 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:17:14.127675  583853 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:17:14.127679  583853 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:17:14.127682  583853 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:17:14.127685  583853 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:17:14.127696  583853 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:17:14.127700  583853 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:17:14.127703  583853 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:17:14.127706  583853 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:17:14.127710  583853 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:17:14.127722  583853 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:17:14.127727  583853 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:17:14.127730  583853 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:17:14.127734  583853 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:17:14.127737  583853 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:17:14.127740  583853 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:17:14.127745  583853 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:17:14.127752  583853 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:17:14.127756  583853 cri.go:96] found id: ""
	I0111 08:17:14.127811  583853 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:17:14.147806  583853 out.go:203] 
	W0111 08:17:14.150770  583853 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:17:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:17:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:17:14.150840  583853 out.go:285] * 
	* 
	W0111 08:17:14.155099  583853 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:17:14.158217  583853 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-328805 --alsologtostderr -v=1": exit status 11
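Headlamp hits the enable-side variant of the same probe (MK_ADDON_ENABLE_PAUSED). The cluster is not actually paused - the docker inspect below reports "Running": true and "Paused": false for the node container - so the exit status comes from the failed runc call, not from a paused runtime. minikube's own view can be double-checked from the same workspace as the test binary (path taken from the commands above):

	out/minikube-linux-arm64 -p addons-328805 status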
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-328805
helpers_test.go:244: (dbg) docker inspect addons-328805:

-- stdout --
	[
	    {
	        "Id": "ed362c56a844a7577e90a4a4bcf5515e93689407dafd866e8cb4c8fabd92adfa",
	        "Created": "2026-01-11T08:14:13.366653893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 578070,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T08:14:13.431517148Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/ed362c56a844a7577e90a4a4bcf5515e93689407dafd866e8cb4c8fabd92adfa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed362c56a844a7577e90a4a4bcf5515e93689407dafd866e8cb4c8fabd92adfa/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed362c56a844a7577e90a4a4bcf5515e93689407dafd866e8cb4c8fabd92adfa/hosts",
	        "LogPath": "/var/lib/docker/containers/ed362c56a844a7577e90a4a4bcf5515e93689407dafd866e8cb4c8fabd92adfa/ed362c56a844a7577e90a4a4bcf5515e93689407dafd866e8cb4c8fabd92adfa-json.log",
	        "Name": "/addons-328805",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-328805:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-328805",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed362c56a844a7577e90a4a4bcf5515e93689407dafd866e8cb4c8fabd92adfa",
	                "LowerDir": "/var/lib/docker/overlay2/cf6f2dc6ee6a797ed1fe11ee04dee0e945893474bb6ec560f4f6640f8ca68417-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf6f2dc6ee6a797ed1fe11ee04dee0e945893474bb6ec560f4f6640f8ca68417/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf6f2dc6ee6a797ed1fe11ee04dee0e945893474bb6ec560f4f6640f8ca68417/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf6f2dc6ee6a797ed1fe11ee04dee0e945893474bb6ec560f4f6640f8ca68417/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-328805",
	                "Source": "/var/lib/docker/volumes/addons-328805/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-328805",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-328805",
	                "name.minikube.sigs.k8s.io": "addons-328805",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e81e211a7b55b40378496ef2f0e550a7866739adf01c271afe61be1dc0850348",
	            "SandboxKey": "/var/run/docker/netns/e81e211a7b55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-328805": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:7a:bd:af:7c:89",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31434f08dd36532c5c2e5f47ce86f81284c84b88c8372664828e0e7ae9413702",
	                    "EndpointID": "d90adfb0d4beee1135e1711114dc31bd39d0f018cb13f2728ebc4ad8fc8fafa7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-328805",
	                        "ed362c56a844"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-328805 -n addons-328805
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-328805 logs -n 25: (1.47879077s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-639464 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-639464   │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │ 11 Jan 26 08:13 UTC │
	│ delete  │ -p download-only-639464                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-639464   │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │ 11 Jan 26 08:13 UTC │
	│ start   │ -o=json --download-only -p download-only-637593 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-637593   │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │ 11 Jan 26 08:13 UTC │
	│ delete  │ -p download-only-637593                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-637593   │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │ 11 Jan 26 08:13 UTC │
	│ delete  │ -p download-only-639464                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-639464   │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │ 11 Jan 26 08:13 UTC │
	│ delete  │ -p download-only-637593                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-637593   │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │ 11 Jan 26 08:13 UTC │
	│ start   │ --download-only -p download-docker-783115 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-783115 │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │                     │
	│ delete  │ -p download-docker-783115                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-783115 │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │ 11 Jan 26 08:13 UTC │
	│ start   │ --download-only -p binary-mirror-442784 --alsologtostderr --binary-mirror http://127.0.0.1:34909 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-442784   │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │                     │
	│ delete  │ -p binary-mirror-442784                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-442784   │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │ 11 Jan 26 08:13 UTC │
	│ addons  │ enable dashboard -p addons-328805                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-328805          │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │                     │
	│ addons  │ disable dashboard -p addons-328805                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-328805          │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │                     │
	│ start   │ -p addons-328805 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-328805          │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │ 11 Jan 26 08:17 UTC │
	│ addons  │ addons-328805 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-328805          │ jenkins │ v1.37.0 │ 11 Jan 26 08:17 UTC │                     │
	│ addons  │ addons-328805 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-328805          │ jenkins │ v1.37.0 │ 11 Jan 26 08:17 UTC │                     │
	│ addons  │ enable headlamp -p addons-328805 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-328805          │ jenkins │ v1.37.0 │ 11 Jan 26 08:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
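The table above is the recorded command history for this run. Purely for reference, a row such as the "addons disable volcano" entry can be replayed against the same profile with a minimal Go driver like the sketch below; this is illustrative only and not part of the minikube test suite (the binary path and profile name are copied from the table).

// Illustrative only: replays one command row from the table above via os/exec.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the "addons disable volcano" row recorded at 08:17 UTC.
	cmd := exec.Command("out/minikube-linux-arm64",
		"-p", "addons-328805",
		"addons", "disable", "volcano",
		"--alsologtostderr", "-v=1")
	out, err := cmd.CombinedOutput()
	fmt.Printf("exit err: %v\noutput:\n%s\n", err, out)
}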
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 08:13:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 08:13:48.252704  577671 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:13:48.252914  577671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:13:48.252940  577671 out.go:374] Setting ErrFile to fd 2...
	I0111 08:13:48.252959  577671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:13:48.253245  577671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:13:48.253756  577671 out.go:368] Setting JSON to false
	I0111 08:13:48.254700  577671 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10578,"bootTime":1768108650,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 08:13:48.254799  577671 start.go:143] virtualization:  
	I0111 08:13:48.273380  577671 out.go:179] * [addons-328805] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:13:48.306269  577671 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:13:48.306304  577671 notify.go:221] Checking for updates...
	I0111 08:13:48.370551  577671 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:13:48.402110  577671 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:13:48.418906  577671 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 08:13:48.451375  577671 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:13:48.483600  577671 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:13:48.527785  577671 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:13:48.549330  577671 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:13:48.549446  577671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:13:48.608737  577671 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2026-01-11 08:13:48.599405629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:13:48.608850  577671 docker.go:319] overlay module found
	I0111 08:13:48.655010  577671 out.go:179] * Using the docker driver based on user configuration
	I0111 08:13:48.687274  577671 start.go:309] selected driver: docker
	I0111 08:13:48.687311  577671 start.go:928] validating driver "docker" against <nil>
	I0111 08:13:48.687327  577671 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:13:48.688182  577671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:13:48.742973  577671 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2026-01-11 08:13:48.734108884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:13:48.743131  577671 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:13:48.743391  577671 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 08:13:48.766331  577671 out.go:179] * Using Docker driver with root privileges
	I0111 08:13:48.798602  577671 cni.go:84] Creating CNI manager for ""
	I0111 08:13:48.798679  577671 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:13:48.798690  577671 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 08:13:48.798772  577671 start.go:353] cluster config:
	{Name:addons-328805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-328805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:13:48.830510  577671 out.go:179] * Starting "addons-328805" primary control-plane node in "addons-328805" cluster
	I0111 08:13:48.863022  577671 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 08:13:48.895076  577671 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:13:48.927503  577671 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:13:48.927579  577671 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 08:13:48.927590  577671 cache.go:65] Caching tarball of preloaded images
	I0111 08:13:48.927625  577671 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:13:48.927735  577671 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 08:13:48.927767  577671 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 08:13:48.928123  577671 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/config.json ...
	I0111 08:13:48.928151  577671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/config.json: {Name:mk613331b7d341ebe3ca4c6918daebcbfa5ac8a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:13:48.944166  577671 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 to local cache
	I0111 08:13:48.944285  577671 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local cache directory
	I0111 08:13:48.944304  577671 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local cache directory, skipping pull
	I0111 08:13:48.944309  577671 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in cache, skipping pull
	I0111 08:13:48.944315  577671 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 as a tarball
	I0111 08:13:48.944320  577671 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 from local cache
	I0111 08:14:07.397984  577671 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 from cached tarball
	I0111 08:14:07.398027  577671 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:14:07.398078  577671 start.go:360] acquireMachinesLock for addons-328805: {Name:mke3223421c2fae940d6a2e4f040c861b8e0de97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:14:07.398237  577671 start.go:364] duration metric: took 132.137µs to acquireMachinesLock for "addons-328805"
	I0111 08:14:07.398273  577671 start.go:93] Provisioning new machine with config: &{Name:addons-328805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-328805 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 08:14:07.398345  577671 start.go:125] createHost starting for "" (driver="docker")
	I0111 08:14:07.401700  577671 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0111 08:14:07.401949  577671 start.go:159] libmachine.API.Create for "addons-328805" (driver="docker")
	I0111 08:14:07.401987  577671 client.go:173] LocalClient.Create starting
	I0111 08:14:07.402140  577671 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem
	I0111 08:14:08.131810  577671 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem
	I0111 08:14:08.209949  577671 cli_runner.go:164] Run: docker network inspect addons-328805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 08:14:08.224237  577671 cli_runner.go:211] docker network inspect addons-328805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 08:14:08.224324  577671 network_create.go:284] running [docker network inspect addons-328805] to gather additional debugging logs...
	I0111 08:14:08.224345  577671 cli_runner.go:164] Run: docker network inspect addons-328805
	W0111 08:14:08.239374  577671 cli_runner.go:211] docker network inspect addons-328805 returned with exit code 1
	I0111 08:14:08.239406  577671 network_create.go:287] error running [docker network inspect addons-328805]: docker network inspect addons-328805: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-328805 not found
	I0111 08:14:08.239427  577671 network_create.go:289] output of [docker network inspect addons-328805]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-328805 not found
	
	** /stderr **
	I0111 08:14:08.239531  577671 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:14:08.255557  577671 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a33160}
	I0111 08:14:08.255606  577671 network_create.go:124] attempt to create docker network addons-328805 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0111 08:14:08.255660  577671 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-328805 addons-328805
	I0111 08:14:08.325966  577671 network_create.go:108] docker network addons-328805 192.168.49.0/24 created
	I0111 08:14:08.326000  577671 kic.go:121] calculated static IP "192.168.49.2" for the "addons-328805" container
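The network step above creates the 192.168.49.0/24 bridge with gateway 192.168.49.1, after which the log reports a calculated static IP of 192.168.49.2 for the container. A minimal sketch of that address arithmetic (not minikube's implementation, just the same .1/.2 derivation from the subnet base):

// Illustrative: derive the gateway (.1) and first client IP (.2) from a /24 subnet.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	base := ipnet.IP.To4()
	gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)  // 192.168.49.1
	staticIP := net.IPv4(base[0], base[1], base[2], base[3]+2) // 192.168.49.2
	fmt.Println("gateway:", gateway, "container IP:", staticIP)
}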
	I0111 08:14:08.326079  577671 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 08:14:08.341789  577671 cli_runner.go:164] Run: docker volume create addons-328805 --label name.minikube.sigs.k8s.io=addons-328805 --label created_by.minikube.sigs.k8s.io=true
	I0111 08:14:08.359244  577671 oci.go:103] Successfully created a docker volume addons-328805
	I0111 08:14:08.359344  577671 cli_runner.go:164] Run: docker run --rm --name addons-328805-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-328805 --entrypoint /usr/bin/test -v addons-328805:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 08:14:09.466348  577671 cli_runner.go:217] Completed: docker run --rm --name addons-328805-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-328805 --entrypoint /usr/bin/test -v addons-328805:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib: (1.106936984s)
	I0111 08:14:09.466379  577671 oci.go:107] Successfully prepared a docker volume addons-328805
	I0111 08:14:09.466423  577671 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:14:09.466436  577671 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 08:14:09.466505  577671 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-328805:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 08:14:13.299326  577671 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-328805:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (3.832781663s)
	I0111 08:14:13.299369  577671 kic.go:203] duration metric: took 3.832928726s to extract preloaded images to volume ...
	W0111 08:14:13.299514  577671 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 08:14:13.299615  577671 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 08:14:13.351667  577671 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-328805 --name addons-328805 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-328805 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-328805 --network addons-328805 --ip 192.168.49.2 --volume addons-328805:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 08:14:13.638571  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Running}}
	I0111 08:14:13.657002  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:13.678392  577671 cli_runner.go:164] Run: docker exec addons-328805 stat /var/lib/dpkg/alternatives/iptables
	I0111 08:14:13.740077  577671 oci.go:144] the created container "addons-328805" has a running status.
	I0111 08:14:13.740104  577671 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa...
	I0111 08:14:14.097422  577671 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 08:14:14.124299  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:14.143356  577671 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 08:14:14.143376  577671 kic_runner.go:114] Args: [docker exec --privileged addons-328805 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 08:14:14.201424  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:14.219027  577671 machine.go:94] provisionDockerMachine start ...
	I0111 08:14:14.219127  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:14.236841  577671 main.go:144] libmachine: Using SSH client type: native
	I0111 08:14:14.237174  577671 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0111 08:14:14.237183  577671 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:14:14.237753  577671 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53044->127.0.0.1:33503: read: connection reset by peer
	I0111 08:14:17.385744  577671 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-328805
	
	I0111 08:14:17.385809  577671 ubuntu.go:182] provisioning hostname "addons-328805"
	I0111 08:14:17.385885  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:17.402923  577671 main.go:144] libmachine: Using SSH client type: native
	I0111 08:14:17.403263  577671 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0111 08:14:17.403281  577671 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-328805 && echo "addons-328805" | sudo tee /etc/hostname
	I0111 08:14:17.559426  577671 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-328805
	
	I0111 08:14:17.559514  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:17.576149  577671 main.go:144] libmachine: Using SSH client type: native
	I0111 08:14:17.576463  577671 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0111 08:14:17.576479  577671 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-328805' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-328805/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-328805' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:14:17.722437  577671 main.go:144] libmachine: SSH cmd err, output: <nil>: 
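The SSH command above is a small shell script that makes sure /etc/hosts resolves the node hostname, rewriting an existing 127.0.1.1 line or appending one. Re-expressed in Go purely for illustration (simplified to the check-then-append branch; this is not minikube code):

// Illustrative: ensure /etc/hosts has an entry ending in the given hostname,
// appending a 127.0.1.1 line if none is found (simplified vs. the shell above,
// which also rewrites an existing 127.0.1.1 line in place).
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == hostname {
			return nil // already present
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", hostname)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "addons-328805"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}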
	I0111 08:14:17.722508  577671 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 08:14:17.722543  577671 ubuntu.go:190] setting up certificates
	I0111 08:14:17.722582  577671 provision.go:84] configureAuth start
	I0111 08:14:17.722668  577671 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-328805
	I0111 08:14:17.740424  577671 provision.go:143] copyHostCerts
	I0111 08:14:17.740510  577671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 08:14:17.740641  577671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 08:14:17.740701  577671 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 08:14:17.740759  577671 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.addons-328805 san=[127.0.0.1 192.168.49.2 addons-328805 localhost minikube]
	I0111 08:14:18.228591  577671 provision.go:177] copyRemoteCerts
	I0111 08:14:18.228662  577671 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:14:18.228703  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:18.245767  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:18.350029  577671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0111 08:14:18.368361  577671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 08:14:18.387048  577671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 08:14:18.404289  577671 provision.go:87] duration metric: took 681.66455ms to configureAuth
	I0111 08:14:18.404321  577671 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:14:18.404548  577671 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:14:18.404657  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:18.421491  577671 main.go:144] libmachine: Using SSH client type: native
	I0111 08:14:18.421827  577671 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0111 08:14:18.421848  577671 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 08:14:18.721411  577671 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 08:14:18.721480  577671 machine.go:97] duration metric: took 4.502432702s to provisionDockerMachine
	I0111 08:14:18.721496  577671 client.go:176] duration metric: took 11.319498951s to LocalClient.Create
	I0111 08:14:18.721514  577671 start.go:167] duration metric: took 11.319566636s to libmachine.API.Create "addons-328805"
	I0111 08:14:18.721522  577671 start.go:293] postStartSetup for "addons-328805" (driver="docker")
	I0111 08:14:18.721533  577671 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:14:18.721616  577671 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:14:18.721661  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:18.739339  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:18.842108  577671 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:14:18.845436  577671 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:14:18.845466  577671 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:14:18.845478  577671 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 08:14:18.845545  577671 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 08:14:18.845575  577671 start.go:296] duration metric: took 124.045849ms for postStartSetup
	I0111 08:14:18.845901  577671 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-328805
	I0111 08:14:18.862119  577671 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/config.json ...
	I0111 08:14:18.862466  577671 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:14:18.862519  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:18.879656  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:18.978992  577671 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:14:18.983525  577671 start.go:128] duration metric: took 11.585164326s to createHost
	I0111 08:14:18.983550  577671 start.go:83] releasing machines lock for "addons-328805", held for 11.585295158s
	I0111 08:14:18.983621  577671 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-328805
	I0111 08:14:18.999937  577671 ssh_runner.go:195] Run: cat /version.json
	I0111 08:14:18.999998  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:19.000250  577671 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:14:19.000317  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:19.023378  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:19.035650  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:19.231461  577671 ssh_runner.go:195] Run: systemctl --version
	I0111 08:14:19.237874  577671 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 08:14:19.273944  577671 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:14:19.278480  577671 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:14:19.278602  577671 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:14:19.307153  577671 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 08:14:19.307179  577671 start.go:496] detecting cgroup driver to use...
	I0111 08:14:19.307213  577671 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 08:14:19.307264  577671 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 08:14:19.324615  577671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:14:19.337246  577671 docker.go:218] disabling cri-docker service (if available) ...
	I0111 08:14:19.337344  577671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 08:14:19.354853  577671 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 08:14:19.373668  577671 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 08:14:19.498094  577671 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 08:14:19.625228  577671 docker.go:234] disabling docker service ...
	I0111 08:14:19.625301  577671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 08:14:19.645814  577671 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 08:14:19.659220  577671 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 08:14:19.776927  577671 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 08:14:19.897371  577671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 08:14:19.910675  577671 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:14:19.924557  577671 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 08:14:19.924623  577671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:14:19.933234  577671 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 08:14:19.933314  577671 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:14:19.942574  577671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:14:19.951488  577671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:14:19.960261  577671 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:14:19.968373  577671 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:14:19.977497  577671 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:14:19.991663  577671 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:14:20.007570  577671 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:14:20.017431  577671 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:14:20.025936  577671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:14:20.139490  577671 ssh_runner.go:195] Run: sudo systemctl restart crio
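The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place, pinning pause_image to registry.k8s.io/pause:3.10.1 and switching cgroup_manager to cgroupfs before restarting CRI-O. The same two substitutions, sketched with Go's regexp package rather than sed (illustrative only; the sample input config is assumed, not taken from the node):

// Illustrative: the pause_image and cgroup_manager rewrites from the log,
// applied to an assumed sample config with Go regexps instead of sed.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"`

	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Println(conf)
}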
	I0111 08:14:20.318511  577671 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 08:14:20.318675  577671 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 08:14:20.322719  577671 start.go:574] Will wait 60s for crictl version
	I0111 08:14:20.322820  577671 ssh_runner.go:195] Run: which crictl
	I0111 08:14:20.326441  577671 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:14:20.352421  577671 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 08:14:20.352551  577671 ssh_runner.go:195] Run: crio --version
	I0111 08:14:20.380663  577671 ssh_runner.go:195] Run: crio --version
	I0111 08:14:20.416319  577671 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 08:14:20.419145  577671 cli_runner.go:164] Run: docker network inspect addons-328805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:14:20.435432  577671 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0111 08:14:20.439303  577671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:14:20.449227  577671 kubeadm.go:884] updating cluster {Name:addons-328805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-328805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:14:20.449364  577671 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:14:20.449428  577671 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:14:20.490670  577671 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:14:20.490698  577671 crio.go:433] Images already preloaded, skipping extraction
	I0111 08:14:20.490757  577671 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:14:20.517990  577671 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:14:20.518018  577671 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:14:20.518027  577671 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I0111 08:14:20.518114  577671 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-328805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-328805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:14:20.518222  577671 ssh_runner.go:195] Run: crio config
	I0111 08:14:20.569002  577671 cni.go:84] Creating CNI manager for ""
	I0111 08:14:20.569025  577671 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:14:20.569041  577671 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:14:20.569066  577671 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-328805 NodeName:addons-328805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:14:20.569193  577671 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-328805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
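The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the log later copies to /var/tmp/minikube/kubeadm.yaml.new. A quick way to sanity-check such a stream is to decode each document and print its apiVersion and kind; the sketch below assumes gopkg.in/yaml.v3 is available and is not part of minikube:

// Illustrative: walk a multi-document kubeadm YAML stream and list each
// document's apiVersion/kind (path taken from the scp step in the log).
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}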
	I0111 08:14:20.569269  577671 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:14:20.577039  577671 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:14:20.577116  577671 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:14:20.584510  577671 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0111 08:14:20.597103  577671 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:14:20.609798  577671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
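	Not part of the test output: with the rendered config staged on the node as kubeadm.yaml.new (previous line), a minimal way to sanity-check it before init, assuming the v1.35.0 binaries staged under /var/lib/minikube/binaries are used as in this run, is a kubeadm dry run:
	  sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run --ignore-preflight-errors=all   # validates the config and renders manifests without modifying the host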
	I0111 08:14:20.623081  577671 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:14:20.626478  577671 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:14:20.636132  577671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:14:20.759028  577671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:14:20.778809  577671 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805 for IP: 192.168.49.2
	I0111 08:14:20.778882  577671 certs.go:195] generating shared ca certs ...
	I0111 08:14:20.778915  577671 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:20.779091  577671 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 08:14:21.193540  577671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt ...
	I0111 08:14:21.193577  577671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt: {Name:mke59fe86c68e70a0572e5ddb9b8c240803e6bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:21.193793  577671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key ...
	I0111 08:14:21.193806  577671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key: {Name:mka47dd259b8f2602626ab107f3b619120c294f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:21.193912  577671 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 08:14:21.537757  577671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt ...
	I0111 08:14:21.537791  577671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt: {Name:mkfa701e1257f9265e0793674d68987cf2e5671c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:21.537994  577671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key ...
	I0111 08:14:21.538006  577671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key: {Name:mk31371dd0068bc80ab61a668dec4a02124519b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:21.538104  577671 certs.go:257] generating profile certs ...
	I0111 08:14:21.538187  577671 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.key
	I0111 08:14:21.538206  577671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt with IP's: []
	I0111 08:14:21.804481  577671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt ...
	I0111 08:14:21.804530  577671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: {Name:mk46f6502b04a49f65c79740d34f35b5269faae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:21.804729  577671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.key ...
	I0111 08:14:21.804743  577671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.key: {Name:mk54be25ffb1b6e262c784a6973ba1206b12528b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:21.804833  577671 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/apiserver.key.acd03f51
	I0111 08:14:21.804853  577671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/apiserver.crt.acd03f51 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0111 08:14:22.006160  577671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/apiserver.crt.acd03f51 ...
	I0111 08:14:22.006195  577671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/apiserver.crt.acd03f51: {Name:mkae4448e74d4683f048e509b67044894dd98ea1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:22.006402  577671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/apiserver.key.acd03f51 ...
	I0111 08:14:22.006417  577671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/apiserver.key.acd03f51: {Name:mk1823e27f6fc236145ccab2507514ae57b5a6df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:22.006505  577671 certs.go:382] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/apiserver.crt.acd03f51 -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/apiserver.crt
	I0111 08:14:22.006595  577671 certs.go:386] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/apiserver.key.acd03f51 -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/apiserver.key
	I0111 08:14:22.006654  577671 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/proxy-client.key
	I0111 08:14:22.006675  577671 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/proxy-client.crt with IP's: []
	I0111 08:14:22.287541  577671 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/proxy-client.crt ...
	I0111 08:14:22.287590  577671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/proxy-client.crt: {Name:mk2f67dcb8715b5eb185a921ea2d8deaa112baf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:22.287772  577671 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/proxy-client.key ...
	I0111 08:14:22.287789  577671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/proxy-client.key: {Name:mkf99518ac114b68fa937fa4a2ce1aa18f50f337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:22.287982  577671 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 08:14:22.288025  577671 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 08:14:22.288056  577671 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:14:22.288093  577671 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 08:14:22.288645  577671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:14:22.306446  577671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 08:14:22.325125  577671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:14:22.341967  577671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:14:22.359474  577671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0111 08:14:22.376978  577671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0111 08:14:22.394912  577671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:14:22.412170  577671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 08:14:22.429730  577671 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:14:22.448288  577671 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:14:22.461730  577671 ssh_runner.go:195] Run: openssl version
	I0111 08:14:22.468567  577671 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:14:22.476304  577671 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:14:22.484194  577671 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:14:22.490617  577671 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:14:22.490717  577671 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:14:22.532240  577671 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:14:22.539538  577671 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
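	Illustrative only (not part of the run): the b5213941.0 name used above is the OpenSSL subject hash of the minikube CA, so the two steps can be cross-checked with:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, b5213941 here
	  readlink /etc/ssl/certs/b5213941.0                                        # should resolve to /etc/ssl/certs/minikubeCA.pem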
	I0111 08:14:22.546952  577671 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:14:22.550592  577671 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 08:14:22.550664  577671 kubeadm.go:401] StartCluster: {Name:addons-328805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-328805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:14:22.550757  577671 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:14:22.550825  577671 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:14:22.577980  577671 cri.go:96] found id: ""
	I0111 08:14:22.578058  577671 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:14:22.585748  577671 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:14:22.593424  577671 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:14:22.593511  577671 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:14:22.601147  577671 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:14:22.601206  577671 kubeadm.go:158] found existing configuration files:
	
	I0111 08:14:22.601283  577671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:14:22.608906  577671 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:14:22.608974  577671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:14:22.616042  577671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:14:22.623502  577671 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:14:22.623597  577671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:14:22.630810  577671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:14:22.638287  577671 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:14:22.638365  577671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:14:22.645420  577671 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:14:22.652838  577671 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:14:22.652905  577671 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:14:22.660062  577671 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:14:22.698637  577671 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:14:22.698749  577671 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:14:22.773052  577671 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:14:22.773131  577671 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:14:22.773173  577671 kubeadm.go:319] OS: Linux
	I0111 08:14:22.773238  577671 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:14:22.773293  577671 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:14:22.773346  577671 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:14:22.773422  577671 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:14:22.773479  577671 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:14:22.773540  577671 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:14:22.773626  577671 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:14:22.773680  577671 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:14:22.773736  577671 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:14:22.842909  577671 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:14:22.843023  577671 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:14:22.843125  577671 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:14:22.854533  577671 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:14:22.861095  577671 out.go:252]   - Generating certificates and keys ...
	I0111 08:14:22.861256  577671 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:14:22.861363  577671 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:14:22.927216  577671 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 08:14:23.738704  577671 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 08:14:23.896752  577671 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 08:14:24.189727  577671 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 08:14:24.486522  577671 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 08:14:24.486898  577671 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-328805 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0111 08:14:24.692404  577671 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 08:14:24.692815  577671 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-328805 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0111 08:14:24.959427  577671 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 08:14:25.138782  577671 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 08:14:25.253011  577671 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 08:14:25.253306  577671 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:14:25.417887  577671 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:14:26.255869  577671 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:14:26.675976  577671 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:14:27.038636  577671 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:14:27.578749  577671 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:14:27.579635  577671 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:14:27.582634  577671 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:14:27.586151  577671 out.go:252]   - Booting up control plane ...
	I0111 08:14:27.586257  577671 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:14:27.586344  577671 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:14:27.587408  577671 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:14:27.603842  577671 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:14:27.604184  577671 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:14:27.612326  577671 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:14:27.612693  577671 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:14:27.612909  577671 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:14:27.746257  577671 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:14:27.746382  577671 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:14:28.747397  577671 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001693765s
	I0111 08:14:28.751465  577671 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0111 08:14:28.751562  577671 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0111 08:14:28.751657  577671 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0111 08:14:28.751736  577671 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0111 08:14:30.262534  577671 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.510344392s
	I0111 08:14:32.173696  577671 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.422231318s
	I0111 08:14:33.753254  577671 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.00157756s
	I0111 08:14:33.785715  577671 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 08:14:33.800297  577671 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 08:14:33.815830  577671 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 08:14:33.816058  577671 kubeadm.go:319] [mark-control-plane] Marking the node addons-328805 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 08:14:33.830645  577671 kubeadm.go:319] [bootstrap-token] Using token: corgxx.jf59cqh29xyqztqc
	I0111 08:14:33.833538  577671 out.go:252]   - Configuring RBAC rules ...
	I0111 08:14:33.833670  577671 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 08:14:33.839331  577671 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 08:14:33.852043  577671 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 08:14:33.856031  577671 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 08:14:33.860172  577671 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 08:14:33.866237  577671 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 08:14:34.162376  577671 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 08:14:34.596992  577671 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 08:14:35.161847  577671 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 08:14:35.163361  577671 kubeadm.go:319] 
	I0111 08:14:35.163437  577671 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 08:14:35.163443  577671 kubeadm.go:319] 
	I0111 08:14:35.163520  577671 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 08:14:35.163524  577671 kubeadm.go:319] 
	I0111 08:14:35.163549  577671 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 08:14:35.164026  577671 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 08:14:35.164084  577671 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 08:14:35.164089  577671 kubeadm.go:319] 
	I0111 08:14:35.164143  577671 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 08:14:35.164147  577671 kubeadm.go:319] 
	I0111 08:14:35.164194  577671 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 08:14:35.164199  577671 kubeadm.go:319] 
	I0111 08:14:35.164250  577671 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 08:14:35.164325  577671 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 08:14:35.164394  577671 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 08:14:35.164398  577671 kubeadm.go:319] 
	I0111 08:14:35.164665  577671 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 08:14:35.164745  577671 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 08:14:35.164749  577671 kubeadm.go:319] 
	I0111 08:14:35.165014  577671 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token corgxx.jf59cqh29xyqztqc \
	I0111 08:14:35.165124  577671 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb \
	I0111 08:14:35.165315  577671 kubeadm.go:319] 	--control-plane 
	I0111 08:14:35.165326  577671 kubeadm.go:319] 
	I0111 08:14:35.165627  577671 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 08:14:35.165638  577671 kubeadm.go:319] 
	I0111 08:14:35.165897  577671 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token corgxx.jf59cqh29xyqztqc \
	I0111 08:14:35.166196  577671 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb 
	I0111 08:14:35.170304  577671 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:14:35.170762  577671 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:14:35.170906  577671 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
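	The last warning is informational here, since minikube starts the kubelet itself (see the earlier systemctl start kubelet call); outside that lifecycle the warning would be addressed with:
	  sudo systemctl enable kubelet.service   # make the kubelet unit start on boot, as the preflight warning suggests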
	I0111 08:14:35.170942  577671 cni.go:84] Creating CNI manager for ""
	I0111 08:14:35.170955  577671 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:14:35.175939  577671 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 08:14:35.178811  577671 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 08:14:35.183138  577671 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0111 08:14:35.183162  577671 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 08:14:35.197159  577671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
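	Not part of the test output: a quick way to confirm the CNI rollout afterwards, assuming the applied manifest creates the usual kindnet DaemonSet in kube-system (an assumption, the object names are not shown in this log):
	  sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonset kindnet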
	I0111 08:14:35.489810  577671 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 08:14:35.489903  577671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:14:35.489987  577671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-328805 minikube.k8s.io/updated_at=2026_01_11T08_14_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=addons-328805 minikube.k8s.io/primary=true
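	Illustrative only: the minikube.k8s.io/* labels applied above can be inspected with a plain node query, e.g.:
	  sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node addons-328805 --show-labels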
	I0111 08:14:35.653418  577671 ops.go:34] apiserver oom_adj: -16
	I0111 08:14:35.653528  577671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:14:36.153716  577671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:14:36.654302  577671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:14:37.153799  577671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:14:37.654149  577671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:14:38.153941  577671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:14:38.653659  577671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:14:39.153659  577671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:14:39.654624  577671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:14:39.757653  577671 kubeadm.go:1114] duration metric: took 4.267805474s to wait for elevateKubeSystemPrivileges
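	The repeated "get sa default" calls above are a poll for the default ServiceAccount before kube-system privileges are elevated; an equivalent shell poll (a sketch, not minikube's own code) looks like:
	  until sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	    sleep 0.5   # retry until the controller manager has created the default ServiceAccount
	  done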
	I0111 08:14:39.757717  577671 kubeadm.go:403] duration metric: took 17.207078442s to StartCluster
	I0111 08:14:39.757737  577671 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:39.757886  577671 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:14:39.758489  577671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:14:39.758733  577671 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 08:14:39.758840  577671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 08:14:39.759088  577671 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:14:39.759131  577671 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0111 08:14:39.759223  577671 addons.go:70] Setting yakd=true in profile "addons-328805"
	I0111 08:14:39.759243  577671 addons.go:239] Setting addon yakd=true in "addons-328805"
	I0111 08:14:39.759265  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.759759  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.760310  577671 addons.go:70] Setting inspektor-gadget=true in profile "addons-328805"
	I0111 08:14:39.760336  577671 addons.go:239] Setting addon inspektor-gadget=true in "addons-328805"
	I0111 08:14:39.760367  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.760802  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.760934  577671 addons.go:70] Setting metrics-server=true in profile "addons-328805"
	I0111 08:14:39.760950  577671 addons.go:239] Setting addon metrics-server=true in "addons-328805"
	I0111 08:14:39.760980  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.761401  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.761723  577671 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-328805"
	I0111 08:14:39.761745  577671 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-328805"
	I0111 08:14:39.761770  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.762193  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.762254  577671 out.go:179] * Verifying Kubernetes components...
	I0111 08:14:39.762423  577671 addons.go:70] Setting cloud-spanner=true in profile "addons-328805"
	I0111 08:14:39.762443  577671 addons.go:239] Setting addon cloud-spanner=true in "addons-328805"
	I0111 08:14:39.762463  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.762851  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.764713  577671 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-328805"
	I0111 08:14:39.764744  577671 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-328805"
	I0111 08:14:39.764777  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.765200  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.770225  577671 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-328805"
	I0111 08:14:39.770293  577671 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-328805"
	I0111 08:14:39.770323  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.770804  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.773566  577671 addons.go:70] Setting registry=true in profile "addons-328805"
	I0111 08:14:39.773638  577671 addons.go:239] Setting addon registry=true in "addons-328805"
	I0111 08:14:39.773675  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.775260  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.782951  577671 addons.go:70] Setting default-storageclass=true in profile "addons-328805"
	I0111 08:14:39.782979  577671 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-328805"
	I0111 08:14:39.783314  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.785365  577671 addons.go:70] Setting registry-creds=true in profile "addons-328805"
	I0111 08:14:39.785397  577671 addons.go:239] Setting addon registry-creds=true in "addons-328805"
	I0111 08:14:39.785430  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.785987  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.808046  577671 addons.go:70] Setting storage-provisioner=true in profile "addons-328805"
	I0111 08:14:39.808077  577671 addons.go:239] Setting addon storage-provisioner=true in "addons-328805"
	I0111 08:14:39.808129  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.808665  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.812931  577671 addons.go:70] Setting gcp-auth=true in profile "addons-328805"
	I0111 08:14:39.812962  577671 mustload.go:66] Loading cluster: addons-328805
	I0111 08:14:39.813161  577671 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:14:39.813448  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.832512  577671 addons.go:70] Setting ingress=true in profile "addons-328805"
	I0111 08:14:39.832546  577671 addons.go:239] Setting addon ingress=true in "addons-328805"
	I0111 08:14:39.832589  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.833086  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.836401  577671 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-328805"
	I0111 08:14:39.836430  577671 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-328805"
	I0111 08:14:39.836836  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.854553  577671 addons.go:70] Setting ingress-dns=true in profile "addons-328805"
	I0111 08:14:39.854583  577671 addons.go:239] Setting addon ingress-dns=true in "addons-328805"
	I0111 08:14:39.854659  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.855300  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.855681  577671 addons.go:70] Setting volcano=true in profile "addons-328805"
	I0111 08:14:39.855714  577671 addons.go:239] Setting addon volcano=true in "addons-328805"
	I0111 08:14:39.855741  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.856224  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.882031  577671 addons.go:70] Setting volumesnapshots=true in profile "addons-328805"
	I0111 08:14:39.882082  577671 addons.go:239] Setting addon volumesnapshots=true in "addons-328805"
	I0111 08:14:39.882201  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:39.882895  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:39.916803  577671 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:14:39.971273  577671 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.7
	I0111 08:14:39.976857  577671 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.48.0
	I0111 08:14:39.982616  577671 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0111 08:14:39.982794  577671 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0111 08:14:39.982813  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0111 08:14:39.982916  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:39.991099  577671 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0111 08:14:40.021920  577671 addons.go:239] Setting addon default-storageclass=true in "addons-328805"
	I0111 08:14:40.021988  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:40.035941  577671 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I0111 08:14:40.037738  577671 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I0111 08:14:40.040715  577671 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I0111 08:14:40.040797  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0111 08:14:40.040912  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.050368  577671 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0111 08:14:40.050415  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0111 08:14:40.050556  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.061210  577671 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0111 08:14:40.061293  577671 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0111 08:14:40.061407  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.072407  577671 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0111 08:14:40.072441  577671 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0111 08:14:40.072536  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.090465  577671 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 08:14:40.091421  577671 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0111 08:14:40.091502  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0111 08:14:40.091765  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.107056  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:40.109536  577671 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0111 08:14:40.115615  577671 out.go:179]   - Using image docker.io/registry:3.0.0
	W0111 08:14:40.120756  577671 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0111 08:14:40.121021  577671 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0111 08:14:40.133840  577671 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 08:14:40.133941  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 08:14:40.134075  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.160175  577671 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I0111 08:14:40.160203  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0111 08:14:40.160295  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.160794  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:40.190718  577671 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I0111 08:14:40.192816  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:40.220795  577671 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-328805"
	I0111 08:14:40.220891  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:40.221496  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:40.236763  577671 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0111 08:14:40.236958  577671 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0111 08:14:40.251396  577671 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0111 08:14:40.250436  577671 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0111 08:14:40.259132  577671 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0111 08:14:40.259275  577671 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0111 08:14:40.259287  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0111 08:14:40.259364  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.284680  577671 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0111 08:14:40.284703  577671 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0111 08:14:40.292299  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0111 08:14:40.292856  577671 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0111 08:14:40.292870  577671 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0111 08:14:40.292924  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.293164  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.318294  577671 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0111 08:14:40.318414  577671 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0111 08:14:40.322552  577671 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0111 08:14:40.322579  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I0111 08:14:40.322647  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.322849  577671 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0111 08:14:40.325635  577671 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0111 08:14:40.331176  577671 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0111 08:14:40.331857  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:40.333347  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:40.343696  577671 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0111 08:14:40.347210  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:40.353865  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:40.364204  577671 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0111 08:14:40.364292  577671 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0111 08:14:40.364590  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.382311  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:40.388743  577671 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 08:14:40.388766  577671 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 08:14:40.388834  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.400980  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:40.437743  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:40.443487  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:40.479029  577671 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0111 08:14:40.486141  577671 out.go:179]   - Using image docker.io/busybox:stable
	I0111 08:14:40.493864  577671 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0111 08:14:40.493894  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0111 08:14:40.493962  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:40.511395  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:40.512579  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	W0111 08:14:40.516523  577671 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0111 08:14:40.516573  577671 retry.go:84] will retry after 300ms: ssh: handshake failed: EOF
	I0111 08:14:40.542319  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:40.542796  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:40.543103  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	W0111 08:14:40.548307  577671 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0111 08:14:40.562241  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	W0111 08:14:40.563352  577671 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0111 08:14:40.580528  577671 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:14:40.580715  577671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
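	For reference (reconstructed from the sed expression above, not dumped in this log), the patched CoreDNS Corefile ends up with roughly this shape, with a log directive added before errors and a hosts block for host.minikube.internal inserted before the forward plugin:
	  .:53 {
	      log
	      errors
	      ...
	      hosts {
	         192.168.49.1 host.minikube.internal
	         fallthrough
	      }
	      forward . /etc/resolv.conf ...
	      ...
	  }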
	I0111 08:14:40.727419  577671 node_ready.go:35] waiting up to 6m0s for node "addons-328805" to be "Ready" ...
	W0111 08:14:40.749299  577671 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0111 08:14:40.887513  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I0111 08:14:40.902453  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0111 08:14:40.966793  577671 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0111 08:14:40.966821  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0111 08:14:41.001706  577671 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0111 08:14:41.001746  577671 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0111 08:14:41.005259  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0111 08:14:41.025939  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0111 08:14:41.035508  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 08:14:41.044348  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 08:14:41.083854  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0111 08:14:41.112380  577671 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0111 08:14:41.112407  577671 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0111 08:14:41.126896  577671 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0111 08:14:41.126937  577671 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0111 08:14:41.169624  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0111 08:14:41.225893  577671 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I0111 08:14:41.225938  577671 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0111 08:14:41.234222  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0111 08:14:41.277242  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0111 08:14:41.278015  577671 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0111 08:14:41.278036  577671 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0111 08:14:41.300280  577671 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0111 08:14:41.300308  577671 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0111 08:14:41.325453  577671 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0111 08:14:41.325493  577671 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0111 08:14:41.376728  577671 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0111 08:14:41.376759  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I0111 08:14:41.406204  577671 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0111 08:14:41.406237  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0111 08:14:41.424957  577671 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0111 08:14:41.424986  577671 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0111 08:14:41.547225  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0111 08:14:41.693150  577671 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0111 08:14:41.693179  577671 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0111 08:14:41.718025  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0111 08:14:41.759274  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0111 08:14:42.049876  577671 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0111 08:14:42.049908  577671 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0111 08:14:42.315132  577671 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0111 08:14:42.315209  577671 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0111 08:14:42.622456  577671 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0111 08:14:42.622520  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	W0111 08:14:42.730307  577671 node_ready.go:57] node "addons-328805" has "Ready":"False" status (will retry)
	I0111 08:14:42.798959  577671 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0111 08:14:42.799024  577671 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0111 08:14:42.972093  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0111 08:14:43.023144  577671 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0111 08:14:43.023224  577671 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0111 08:14:43.038876  577671 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.458135569s)
	I0111 08:14:43.038972  577671 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
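These two lines complete the CoreDNS edit started at 08:14:40: the sed pipeline splices a hosts block mapping 192.168.49.1 to host.minikube.internal into the Corefile and replaces the kube-system/coredns ConfigMap (the whole round trip took ~2.46s). A minimal spot check of the result, assuming kubectl access to the same cluster, would be:

	kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
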
	I0111 08:14:43.504855  577671 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0111 08:14:43.504878  577671 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0111 08:14:43.542987  577671 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-328805" context rescaled to 1 replicas
	I0111 08:14:43.831822  577671 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0111 08:14:43.831892  577671 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0111 08:14:44.189825  577671 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0111 08:14:44.189886  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0111 08:14:44.442593  577671 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0111 08:14:44.442675  577671 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0111 08:14:44.589071  577671 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0111 08:14:44.589136  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	W0111 08:14:44.803060  577671 node_ready.go:57] node "addons-328805" has "Ready":"False" status (will retry)
	I0111 08:14:44.835562  577671 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0111 08:14:44.835626  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0111 08:14:45.104993  577671 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0111 08:14:45.105069  577671 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0111 08:14:45.334014  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0111 08:14:46.172419  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.28487134s)
	I0111 08:14:46.172507  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.270027379s)
	I0111 08:14:46.172586  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.167302168s)
	I0111 08:14:46.172678  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.137138516s)
	I0111 08:14:46.172723  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.128343123s)
	I0111 08:14:46.172760  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.088876084s)
	I0111 08:14:46.172778  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.003131008s)
	I0111 08:14:46.172795  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.938549905s)
	I0111 08:14:46.172802  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.146663254s)
	W0111 08:14:46.305446  577671 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
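The warning above is an optimistic-concurrency conflict: while marking the minikube-provided "standard" class as default, the addon also tries to clear the default annotation on "local-path", and another writer updated that StorageClass between the read and the write, so the API server rejected the update with "the object has been modified". The remedy is simply to retry against the latest version; a hedged manual equivalent, assuming kubectl access, is:

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
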
	W0111 08:14:47.256761  577671 node_ready.go:57] node "addons-328805" has "Ready":"False" status (will retry)
	I0111 08:14:47.274986  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.99770189s)
	I0111 08:14:47.275031  577671 addons.go:495] Verifying addon ingress=true in "addons-328805"
	I0111 08:14:47.275362  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.728081637s)
	I0111 08:14:47.275384  577671 addons.go:495] Verifying addon metrics-server=true in "addons-328805"
	I0111 08:14:47.275444  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.557388466s)
	I0111 08:14:47.275485  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.516160643s)
	I0111 08:14:47.275508  577671 addons.go:495] Verifying addon registry=true in "addons-328805"
	I0111 08:14:47.275826  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.303652286s)
	W0111 08:14:47.275859  577671 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0111 08:14:47.275886  577671 retry.go:84] will retry after 200ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
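The apply failure above is a CRD-establishment race: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is submitted in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server does not yet serve the VolumeSnapshotClass kind when the class is applied. The log shows the built-in recovery (a 200ms retry, then a re-apply with --force a few lines below). A manual workaround, assuming kubectl access, is to wait for the CRDs to be established before applying the class:

	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
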
	I0111 08:14:47.278264  577671 out.go:179] * Verifying registry addon...
	I0111 08:14:47.278279  577671 out.go:179] * Verifying ingress addon...
	I0111 08:14:47.278264  577671 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-328805 service yakd-dashboard -n yakd-dashboard
	
	I0111 08:14:47.282227  577671 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0111 08:14:47.283069  577671 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0111 08:14:47.289775  577671 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0111 08:14:47.289803  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:47.289935  577671 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0111 08:14:47.289945  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:47.483626  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0111 08:14:47.601670  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.267571157s)
	I0111 08:14:47.601708  577671 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-328805"
	I0111 08:14:47.604852  577671 out.go:179] * Verifying csi-hostpath-driver addon...
	I0111 08:14:47.608339  577671 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0111 08:14:47.618908  577671 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0111 08:14:47.618929  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:47.788593  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:47.788900  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:47.841797  577671 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0111 08:14:47.841969  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:47.866063  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:47.979477  577671 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0111 08:14:47.993880  577671 addons.go:239] Setting addon gcp-auth=true in "addons-328805"
	I0111 08:14:47.993926  577671 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:14:47.994463  577671 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:14:48.015661  577671 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0111 08:14:48.015719  577671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:14:48.032974  577671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:14:48.111829  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:48.285788  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:48.285921  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:48.612446  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:48.784788  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:48.786100  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:49.111473  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:49.286422  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:49.286841  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:49.611914  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0111 08:14:49.730709  577671 node_ready.go:57] node "addons-328805" has "Ready":"False" status (will retry)
	I0111 08:14:49.786056  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:49.786449  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:50.121519  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:50.268546  577671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.784830883s)
	I0111 08:14:50.268668  577671 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.252970021s)
	I0111 08:14:50.271561  577671 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0111 08:14:50.274373  577671 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0111 08:14:50.276978  577671 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0111 08:14:50.277009  577671 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0111 08:14:50.287422  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:50.287602  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:50.293310  577671 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0111 08:14:50.293341  577671 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0111 08:14:50.307915  577671 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0111 08:14:50.307936  577671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0111 08:14:50.321768  577671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0111 08:14:50.616346  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:50.798684  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:50.799873  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:50.837759  577671 addons.go:495] Verifying addon gcp-auth=true in "addons-328805"
	I0111 08:14:50.841008  577671 out.go:179] * Verifying gcp-auth addon...
	I0111 08:14:50.844573  577671 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0111 08:14:50.895800  577671 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0111 08:14:50.895824  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:51.112237  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:51.285759  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:51.286406  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:51.348122  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:51.612315  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:51.786421  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:51.786760  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:51.847993  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:52.111982  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0111 08:14:52.230911  577671 node_ready.go:57] node "addons-328805" has "Ready":"False" status (will retry)
	I0111 08:14:52.286343  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:52.286532  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:52.348404  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:52.611648  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:52.785332  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:52.786578  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:52.848364  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:53.111423  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:53.286173  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:53.286347  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:53.348189  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:53.611530  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:53.785026  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:53.786012  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:53.848323  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:54.112863  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0111 08:14:54.231051  577671 node_ready.go:57] node "addons-328805" has "Ready":"False" status (will retry)
	I0111 08:14:54.286473  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:54.286637  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:54.347347  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:54.611469  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:54.784932  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:54.787165  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:54.898824  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:55.216675  577671 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0111 08:14:55.216700  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:55.242920  577671 node_ready.go:49] node "addons-328805" is "Ready"
	I0111 08:14:55.242950  577671 node_ready.go:38] duration metric: took 14.515499193s for node "addons-328805" to be "Ready" ...
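Node readiness arrived ~14.5s into the 6-minute wait that began at 08:14:40. Outside the test harness, an equivalent check, assuming kubectl access to the cluster, is a single wait on the node's Ready condition:

	kubectl wait --for=condition=Ready node/addons-328805 --timeout=6m
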
	I0111 08:14:55.242964  577671 api_server.go:52] waiting for apiserver process to appear ...
	I0111 08:14:55.243018  577671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:14:55.281242  577671 api_server.go:72] duration metric: took 15.522470922s to wait for apiserver process to appear ...
	I0111 08:14:55.281317  577671 api_server.go:88] waiting for apiserver healthz status ...
	I0111 08:14:55.281353  577671 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0111 08:14:55.320763  577671 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0111 08:14:55.324227  577671 api_server.go:141] control plane version: v1.35.0
	I0111 08:14:55.324308  577671 api_server.go:131] duration metric: took 42.967628ms to wait for apiserver health ...
	I0111 08:14:55.324336  577671 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 08:14:55.345454  577671 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0111 08:14:55.345530  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:55.346046  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:55.349137  577671 system_pods.go:59] 19 kube-system pods found
	I0111 08:14:55.349210  577671 system_pods.go:61] "coredns-7d764666f9-sgsk9" [3516b949-ddc5-40a8-bc82-e8bc371ec023] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 08:14:55.349235  577671 system_pods.go:61] "csi-hostpath-attacher-0" [1a219092-9d59-4d7d-879c-d18c198ffc87] Pending
	I0111 08:14:55.349285  577671 system_pods.go:61] "csi-hostpath-resizer-0" [3da21008-a555-4134-82c0-1a7ee7607aa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0111 08:14:55.349311  577671 system_pods.go:61] "csi-hostpathplugin-mlwmm" [71e1a243-b876-4fe7-bcf8-bdb5f7047234] Pending
	I0111 08:14:55.349345  577671 system_pods.go:61] "etcd-addons-328805" [d64246a1-558f-4195-b692-efc2a46261ea] Running
	I0111 08:14:55.349366  577671 system_pods.go:61] "kindnet-qdjqq" [0b69a412-d009-4b19-ab47-bacbca2ce2b0] Running
	I0111 08:14:55.349397  577671 system_pods.go:61] "kube-apiserver-addons-328805" [75fb927c-f2b9-4389-9bec-794ad5442f87] Running
	I0111 08:14:55.349428  577671 system_pods.go:61] "kube-controller-manager-addons-328805" [ed60eed3-7285-457a-950e-4e52b75f3d6c] Running
	I0111 08:14:55.349450  577671 system_pods.go:61] "kube-ingress-dns-minikube" [075bde25-e840-4c04-babb-13b6e6e60aa2] Pending
	I0111 08:14:55.349471  577671 system_pods.go:61] "kube-proxy-lmsq4" [6e776930-95cd-4695-b675-c1fc163614cf] Running
	I0111 08:14:55.349502  577671 system_pods.go:61] "kube-scheduler-addons-328805" [540347af-4b80-4e65-84b6-7c5b4ec39eac] Running
	I0111 08:14:55.349526  577671 system_pods.go:61] "metrics-server-5778bb4788-gbb2s" [e44de34d-0dac-4e63-973d-54b6b57440ab] Pending
	I0111 08:14:55.349550  577671 system_pods.go:61] "nvidia-device-plugin-daemonset-gjpvb" [290d35b5-ee15-4b67-8a71-9b269776d8c4] Pending
	I0111 08:14:55.349573  577671 system_pods.go:61] "registry-788cd7d5bc-8s2dv" [1609f04e-6ee9-47e4-b676-e38186ae2b70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0111 08:14:55.349612  577671 system_pods.go:61] "registry-creds-567fb78d95-75qnp" [aa246809-0cd0-490d-a889-84860eb2548e] Pending
	I0111 08:14:55.349656  577671 system_pods.go:61] "registry-proxy-dksf6" [769e8897-81a7-4ed4-9c67-40a68686c465] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0111 08:14:55.349685  577671 system_pods.go:61] "snapshot-controller-6588d87457-rqdsw" [913bdfd0-3d4e-4279-aed1-a163de12b1a7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 08:14:55.349709  577671 system_pods.go:61] "snapshot-controller-6588d87457-szpvw" [83d4f971-d1f0-44e1-b9b3-11af17ddb0a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 08:14:55.349757  577671 system_pods.go:61] "storage-provisioner" [029ced07-6b03-4dc3-83ed-7432def7b833] Pending
	I0111 08:14:55.349780  577671 system_pods.go:74] duration metric: took 25.424795ms to wait for pod list to return data ...
	I0111 08:14:55.349802  577671 default_sa.go:34] waiting for default service account to be created ...
	I0111 08:14:55.398901  577671 default_sa.go:45] found service account: "default"
	I0111 08:14:55.398978  577671 default_sa.go:55] duration metric: took 49.146948ms for default service account to be created ...
	I0111 08:14:55.399004  577671 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 08:14:55.421660  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:55.422562  577671 system_pods.go:86] 19 kube-system pods found
	I0111 08:14:55.422636  577671 system_pods.go:89] "coredns-7d764666f9-sgsk9" [3516b949-ddc5-40a8-bc82-e8bc371ec023] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 08:14:55.422661  577671 system_pods.go:89] "csi-hostpath-attacher-0" [1a219092-9d59-4d7d-879c-d18c198ffc87] Pending
	I0111 08:14:55.422702  577671 system_pods.go:89] "csi-hostpath-resizer-0" [3da21008-a555-4134-82c0-1a7ee7607aa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0111 08:14:55.422728  577671 system_pods.go:89] "csi-hostpathplugin-mlwmm" [71e1a243-b876-4fe7-bcf8-bdb5f7047234] Pending
	I0111 08:14:55.422750  577671 system_pods.go:89] "etcd-addons-328805" [d64246a1-558f-4195-b692-efc2a46261ea] Running
	I0111 08:14:55.422786  577671 system_pods.go:89] "kindnet-qdjqq" [0b69a412-d009-4b19-ab47-bacbca2ce2b0] Running
	I0111 08:14:55.422811  577671 system_pods.go:89] "kube-apiserver-addons-328805" [75fb927c-f2b9-4389-9bec-794ad5442f87] Running
	I0111 08:14:55.422832  577671 system_pods.go:89] "kube-controller-manager-addons-328805" [ed60eed3-7285-457a-950e-4e52b75f3d6c] Running
	I0111 08:14:55.422870  577671 system_pods.go:89] "kube-ingress-dns-minikube" [075bde25-e840-4c04-babb-13b6e6e60aa2] Pending
	I0111 08:14:55.422897  577671 system_pods.go:89] "kube-proxy-lmsq4" [6e776930-95cd-4695-b675-c1fc163614cf] Running
	I0111 08:14:55.422918  577671 system_pods.go:89] "kube-scheduler-addons-328805" [540347af-4b80-4e65-84b6-7c5b4ec39eac] Running
	I0111 08:14:55.422953  577671 system_pods.go:89] "metrics-server-5778bb4788-gbb2s" [e44de34d-0dac-4e63-973d-54b6b57440ab] Pending
	I0111 08:14:55.422978  577671 system_pods.go:89] "nvidia-device-plugin-daemonset-gjpvb" [290d35b5-ee15-4b67-8a71-9b269776d8c4] Pending
	I0111 08:14:55.423002  577671 system_pods.go:89] "registry-788cd7d5bc-8s2dv" [1609f04e-6ee9-47e4-b676-e38186ae2b70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0111 08:14:55.423037  577671 system_pods.go:89] "registry-creds-567fb78d95-75qnp" [aa246809-0cd0-490d-a889-84860eb2548e] Pending
	I0111 08:14:55.423068  577671 system_pods.go:89] "registry-proxy-dksf6" [769e8897-81a7-4ed4-9c67-40a68686c465] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0111 08:14:55.423113  577671 system_pods.go:89] "snapshot-controller-6588d87457-rqdsw" [913bdfd0-3d4e-4279-aed1-a163de12b1a7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 08:14:55.423141  577671 system_pods.go:89] "snapshot-controller-6588d87457-szpvw" [83d4f971-d1f0-44e1-b9b3-11af17ddb0a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 08:14:55.423161  577671 system_pods.go:89] "storage-provisioner" [029ced07-6b03-4dc3-83ed-7432def7b833] Pending
	I0111 08:14:55.423212  577671 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0111 08:14:55.614283  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:55.666903  577671 system_pods.go:86] 19 kube-system pods found
	I0111 08:14:55.666992  577671 system_pods.go:89] "coredns-7d764666f9-sgsk9" [3516b949-ddc5-40a8-bc82-e8bc371ec023] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 08:14:55.667017  577671 system_pods.go:89] "csi-hostpath-attacher-0" [1a219092-9d59-4d7d-879c-d18c198ffc87] Pending
	I0111 08:14:55.667059  577671 system_pods.go:89] "csi-hostpath-resizer-0" [3da21008-a555-4134-82c0-1a7ee7607aa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0111 08:14:55.667084  577671 system_pods.go:89] "csi-hostpathplugin-mlwmm" [71e1a243-b876-4fe7-bcf8-bdb5f7047234] Pending
	I0111 08:14:55.667107  577671 system_pods.go:89] "etcd-addons-328805" [d64246a1-558f-4195-b692-efc2a46261ea] Running
	I0111 08:14:55.667143  577671 system_pods.go:89] "kindnet-qdjqq" [0b69a412-d009-4b19-ab47-bacbca2ce2b0] Running
	I0111 08:14:55.667170  577671 system_pods.go:89] "kube-apiserver-addons-328805" [75fb927c-f2b9-4389-9bec-794ad5442f87] Running
	I0111 08:14:55.667193  577671 system_pods.go:89] "kube-controller-manager-addons-328805" [ed60eed3-7285-457a-950e-4e52b75f3d6c] Running
	I0111 08:14:55.667238  577671 system_pods.go:89] "kube-ingress-dns-minikube" [075bde25-e840-4c04-babb-13b6e6e60aa2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0111 08:14:55.667264  577671 system_pods.go:89] "kube-proxy-lmsq4" [6e776930-95cd-4695-b675-c1fc163614cf] Running
	I0111 08:14:55.667288  577671 system_pods.go:89] "kube-scheduler-addons-328805" [540347af-4b80-4e65-84b6-7c5b4ec39eac] Running
	I0111 08:14:55.667327  577671 system_pods.go:89] "metrics-server-5778bb4788-gbb2s" [e44de34d-0dac-4e63-973d-54b6b57440ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0111 08:14:55.667347  577671 system_pods.go:89] "nvidia-device-plugin-daemonset-gjpvb" [290d35b5-ee15-4b67-8a71-9b269776d8c4] Pending
	I0111 08:14:55.667380  577671 system_pods.go:89] "registry-788cd7d5bc-8s2dv" [1609f04e-6ee9-47e4-b676-e38186ae2b70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0111 08:14:55.667404  577671 system_pods.go:89] "registry-creds-567fb78d95-75qnp" [aa246809-0cd0-490d-a889-84860eb2548e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0111 08:14:55.667429  577671 system_pods.go:89] "registry-proxy-dksf6" [769e8897-81a7-4ed4-9c67-40a68686c465] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0111 08:14:55.667467  577671 system_pods.go:89] "snapshot-controller-6588d87457-rqdsw" [913bdfd0-3d4e-4279-aed1-a163de12b1a7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 08:14:55.667497  577671 system_pods.go:89] "snapshot-controller-6588d87457-szpvw" [83d4f971-d1f0-44e1-b9b3-11af17ddb0a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 08:14:55.667519  577671 system_pods.go:89] "storage-provisioner" [029ced07-6b03-4dc3-83ed-7432def7b833] Pending
	I0111 08:14:55.795828  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:55.804740  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:55.850940  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:55.980147  577671 system_pods.go:86] 19 kube-system pods found
	I0111 08:14:55.980233  577671 system_pods.go:89] "coredns-7d764666f9-sgsk9" [3516b949-ddc5-40a8-bc82-e8bc371ec023] Running
	I0111 08:14:55.980261  577671 system_pods.go:89] "csi-hostpath-attacher-0" [1a219092-9d59-4d7d-879c-d18c198ffc87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0111 08:14:55.980299  577671 system_pods.go:89] "csi-hostpath-resizer-0" [3da21008-a555-4134-82c0-1a7ee7607aa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0111 08:14:55.980329  577671 system_pods.go:89] "csi-hostpathplugin-mlwmm" [71e1a243-b876-4fe7-bcf8-bdb5f7047234] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0111 08:14:55.980350  577671 system_pods.go:89] "etcd-addons-328805" [d64246a1-558f-4195-b692-efc2a46261ea] Running
	I0111 08:14:55.980385  577671 system_pods.go:89] "kindnet-qdjqq" [0b69a412-d009-4b19-ab47-bacbca2ce2b0] Running
	I0111 08:14:55.980410  577671 system_pods.go:89] "kube-apiserver-addons-328805" [75fb927c-f2b9-4389-9bec-794ad5442f87] Running
	I0111 08:14:55.980432  577671 system_pods.go:89] "kube-controller-manager-addons-328805" [ed60eed3-7285-457a-950e-4e52b75f3d6c] Running
	I0111 08:14:55.980474  577671 system_pods.go:89] "kube-ingress-dns-minikube" [075bde25-e840-4c04-babb-13b6e6e60aa2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0111 08:14:55.980501  577671 system_pods.go:89] "kube-proxy-lmsq4" [6e776930-95cd-4695-b675-c1fc163614cf] Running
	I0111 08:14:55.980522  577671 system_pods.go:89] "kube-scheduler-addons-328805" [540347af-4b80-4e65-84b6-7c5b4ec39eac] Running
	I0111 08:14:55.980555  577671 system_pods.go:89] "metrics-server-5778bb4788-gbb2s" [e44de34d-0dac-4e63-973d-54b6b57440ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0111 08:14:55.980582  577671 system_pods.go:89] "nvidia-device-plugin-daemonset-gjpvb" [290d35b5-ee15-4b67-8a71-9b269776d8c4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0111 08:14:55.980608  577671 system_pods.go:89] "registry-788cd7d5bc-8s2dv" [1609f04e-6ee9-47e4-b676-e38186ae2b70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0111 08:14:55.980642  577671 system_pods.go:89] "registry-creds-567fb78d95-75qnp" [aa246809-0cd0-490d-a889-84860eb2548e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0111 08:14:55.980669  577671 system_pods.go:89] "registry-proxy-dksf6" [769e8897-81a7-4ed4-9c67-40a68686c465] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0111 08:14:55.980693  577671 system_pods.go:89] "snapshot-controller-6588d87457-rqdsw" [913bdfd0-3d4e-4279-aed1-a163de12b1a7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 08:14:55.980729  577671 system_pods.go:89] "snapshot-controller-6588d87457-szpvw" [83d4f971-d1f0-44e1-b9b3-11af17ddb0a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0111 08:14:55.980758  577671 system_pods.go:89] "storage-provisioner" [029ced07-6b03-4dc3-83ed-7432def7b833] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 08:14:55.980783  577671 system_pods.go:126] duration metric: took 581.758423ms to wait for k8s-apps to be running ...
	I0111 08:14:55.980823  577671 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 08:14:55.980913  577671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:14:55.999564  577671 system_svc.go:56] duration metric: took 18.73292ms WaitForService to wait for kubelet
	I0111 08:14:55.999646  577671 kubeadm.go:587] duration metric: took 16.240879414s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 08:14:55.999680  577671 node_conditions.go:102] verifying NodePressure condition ...
	I0111 08:14:56.015100  577671 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 08:14:56.015182  577671 node_conditions.go:123] node cpu capacity is 2
	I0111 08:14:56.015211  577671 node_conditions.go:105] duration metric: took 15.509224ms to run NodePressure ...
	I0111 08:14:56.015239  577671 start.go:242] waiting for startup goroutines ...
	I0111 08:14:56.113856  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:56.286811  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:56.287363  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:56.348429  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:56.612260  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:56.788750  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:56.788847  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:56.848134  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:57.112386  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:57.287100  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:57.287512  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:57.347970  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:57.613132  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:57.784918  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:57.787487  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:57.848464  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:58.113052  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:58.287579  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:58.287969  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:58.347961  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:58.615944  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:58.787464  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:58.787741  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:58.848204  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:59.111328  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:59.285061  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:59.287366  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:59.348449  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:14:59.612946  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:14:59.789229  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:14:59.789692  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:14:59.848301  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:00.120302  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:00.287124  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:00.290482  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:00.357220  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:00.614354  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:00.788458  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:00.788585  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:00.848811  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:01.113198  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:01.287843  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:01.289603  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:01.348340  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:01.614359  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:01.787373  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:01.788869  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:01.848120  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:02.113867  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:02.286958  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:02.287965  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:02.348103  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:02.613542  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:02.787543  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:02.787887  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:02.847934  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:03.113319  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:03.287420  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:03.288660  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:03.348188  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:03.612726  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:03.787339  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:03.788654  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:03.848731  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:04.112890  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:04.290630  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:04.291646  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:04.348540  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:04.614027  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:04.787410  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:04.787538  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:04.847782  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:05.112414  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:05.287152  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:05.288331  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:05.348307  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:05.612452  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:05.786815  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:05.787412  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:05.849319  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:06.113439  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:06.287301  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:06.287570  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:06.349260  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:06.612891  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:06.791441  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:06.791950  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:06.848537  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:07.114187  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:07.287718  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:07.288796  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:07.348346  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:07.614015  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:07.787831  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:07.788209  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:07.849177  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:08.119331  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:08.286629  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:08.287615  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:08.348587  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:08.618073  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:08.789607  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:08.790248  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:08.848514  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:09.118002  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:09.288884  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:09.289518  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:09.347766  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:09.616607  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:09.787583  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:09.789150  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:09.848386  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:10.116300  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:10.289273  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:10.289415  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:10.348534  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:10.614688  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:10.819337  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:10.820002  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:10.848293  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:11.113226  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:11.292539  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:11.292914  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:11.348167  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:11.614738  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:11.788311  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:11.790451  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:11.849994  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:12.113099  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:12.287654  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:12.287999  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:12.348324  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:12.611554  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:12.787342  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:12.787816  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:12.847762  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:13.112795  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:13.288357  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:13.288635  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:13.348280  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:13.615548  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:13.787845  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:13.788285  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:13.848812  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:14.126476  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:14.288755  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:14.289158  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:14.348051  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:14.612475  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:14.786912  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:14.788076  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:14.847992  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:15.113124  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:15.285854  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:15.286059  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:15.348260  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:15.612545  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:15.787691  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:15.788189  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:15.848196  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:16.112189  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:16.287830  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:16.288153  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:16.348070  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:16.613075  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:16.787023  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:16.787114  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:16.852094  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:17.113151  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:17.285325  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:17.286316  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:17.348146  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:17.612494  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:17.785531  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:17.786032  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:17.847504  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:18.111746  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:18.288310  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:18.288416  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:18.388939  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:18.612467  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:18.787468  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:18.787830  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:18.848172  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:19.115653  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:19.287457  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:19.287670  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:19.347563  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:19.612496  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:19.787520  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:19.788052  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:19.848125  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:20.113611  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:20.308287  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:20.308497  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:20.348185  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:20.613090  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:20.786521  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:20.786482  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:20.848094  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:21.112460  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:21.287386  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:21.288719  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:21.348034  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:21.613003  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:21.786934  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:21.787142  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:21.848850  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:22.112917  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:22.287670  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:22.288406  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:22.348689  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:22.612648  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:22.786594  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:22.787396  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:22.848242  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:23.111655  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:23.287174  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:23.287863  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:23.388425  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:23.611505  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:23.787262  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:23.787651  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:23.848572  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:24.112473  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:24.286195  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:24.286883  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:24.347721  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:24.613379  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:24.785549  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:24.786789  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:24.847764  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:25.114507  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:25.287026  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:25.287701  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:25.347810  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:25.615528  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:25.787703  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:25.787859  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:25.847722  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:26.112828  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:26.286374  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:26.287583  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:26.347506  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:26.612530  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:26.786671  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:26.786857  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:26.848056  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:27.112507  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:27.289283  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:27.289714  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:27.347910  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:27.613387  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:27.785167  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:27.785779  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:27.847579  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:28.115465  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:28.288351  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:28.288752  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:28.388827  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:28.612812  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:28.786556  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:28.787637  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:28.847670  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:29.112172  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:29.286927  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:29.287078  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:29.348072  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:29.619578  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:29.785448  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:29.786657  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:29.847387  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:30.112544  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:30.287050  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:30.287208  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:30.348221  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:30.611936  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:30.786053  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:30.786678  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:30.847316  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:31.111566  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:31.286163  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:31.286323  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:31.347934  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:31.612178  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:31.785072  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:31.785512  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:31.847359  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:32.111544  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:32.288365  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:32.288548  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:32.348368  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:32.615840  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:32.786995  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:32.787147  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:32.847952  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:33.112434  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:33.286239  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:33.287212  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:33.348556  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:33.612015  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:33.785104  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:33.786207  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:33.885775  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:34.112288  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:34.286602  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:34.286774  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:34.387207  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:34.612686  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:34.786962  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:34.787550  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:34.847596  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:35.112499  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:35.287349  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:35.287564  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:35.347612  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:35.612780  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:35.787703  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:35.787857  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:35.847955  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:36.114371  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:36.286237  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:36.287489  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:36.348192  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:36.612410  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:36.787200  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:36.787349  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:36.848111  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:37.111701  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:37.288387  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:37.288553  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:37.347584  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:37.613396  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:37.786722  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0111 08:15:37.788335  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:37.848769  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:38.113302  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:38.298874  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:38.299095  577671 kapi.go:107] duration metric: took 51.01687304s to wait for kubernetes.io/minikube-addons=registry ...
	I0111 08:15:38.397516  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:38.612887  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:38.787450  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:38.848931  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:39.112356  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:39.286929  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:39.348015  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:39.616979  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:39.786261  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:39.848464  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:40.112338  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:40.286921  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:40.348338  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:40.612075  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:40.789357  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:40.848143  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:41.112378  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:41.291199  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:41.351342  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:41.612118  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:41.789183  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:41.848351  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:42.113018  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:42.288078  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:42.388681  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:42.613483  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:42.786803  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:42.848582  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:43.112642  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:43.294661  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:43.349039  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:43.614281  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:43.786438  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:43.849761  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:44.112512  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:44.287186  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:44.348800  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:44.612715  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:44.786980  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:44.848333  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:45.112773  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:45.288144  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:45.347738  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:45.613931  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:45.786007  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:45.848194  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:46.111939  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:46.286526  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:46.347353  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:46.612242  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:46.786479  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:46.848498  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:47.112141  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:47.286561  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:47.351559  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:47.636239  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:47.787027  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:47.848149  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:48.112177  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:48.290759  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:48.347694  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:48.613533  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:48.789375  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:48.887234  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:49.112992  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:49.286182  577671 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0111 08:15:49.348515  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:49.616431  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:49.786548  577671 kapi.go:107] duration metric: took 1m2.503477959s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0111 08:15:49.848242  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:50.112081  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:50.391816  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:50.612381  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:50.847867  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:51.112385  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:51.348339  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:51.612373  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:51.847837  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:52.112238  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:52.348157  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0111 08:15:52.615674  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:52.848516  577671 kapi.go:107] duration metric: took 1m2.003942707s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0111 08:15:52.853813  577671 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-328805 cluster.
	I0111 08:15:52.857196  577671 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0111 08:15:52.860514  577671 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
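	[editor's note, not part of the captured test output] The two hints above about opting a pod out of credential mounting can be made concrete with a small sketch. The snippet below, written against client-go, creates a pod carrying the `gcp-auth-skip-secret` label so the gcp-auth admission webhook leaves it alone. The pod name, namespace "default", the busybox image, and the label value "true" are illustrative assumptions; only the label key comes from the log above.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); adjust as needed.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-auth-demo", // hypothetical name
				// Label key from the minikube hint; the webhook only checks for its presence,
				// so the value here is an assumption.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "busybox",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}

		// Create the pod; because of the label, gcp-auth should not mount credentials into it.
		created, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("created pod:", created.Name)
	}

	For pods that already exist without the label, the log's other suggestion applies: recreate them or rerun the addon enable step with --refresh so credentials get mounted on the next admission pass.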
	I0111 08:15:53.112064  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:53.611678  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:54.112674  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:54.612868  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:55.111358  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:55.611914  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:56.111288  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:56.613519  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:57.112376  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:57.612529  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:58.112691  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:58.612865  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:59.112377  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:15:59.612441  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:00.124883  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:00.612802  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:01.113181  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:01.612370  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:02.111687  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:02.613064  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:03.112126  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:03.612366  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:04.112226  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:04.612232  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:05.111690  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:05.612624  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:06.112478  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:06.612770  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:07.112988  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:07.614100  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:08.111507  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:08.612299  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:09.112101  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:09.618959  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:10.112338  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:10.612286  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:11.112207  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:11.611266  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:12.111835  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:12.612151  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:13.112204  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:13.612588  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:14.112264  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:14.611790  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:15.112487  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:15.612402  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:16.112386  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:16.612351  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:17.111530  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:17.612907  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:18.111366  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:18.612439  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:19.112163  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:19.615784  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:20.112791  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:20.613461  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:21.111656  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:21.612400  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:22.112272  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:22.613437  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:23.111675  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:23.612571  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:24.112663  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:24.612630  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:25.111895  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:25.612380  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:26.112129  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:26.612435  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:27.111756  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:27.616155  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:28.112115  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:28.612289  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:29.111586  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:29.615157  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:30.112386  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:30.612409  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:31.112051  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:31.613273  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:32.111656  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:32.614879  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:33.112693  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:33.618055  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:34.112178  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:34.612610  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:35.112767  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:35.612552  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:36.111658  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:36.612203  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:37.111846  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:37.615637  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:38.112837  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:38.612462  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:39.113069  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:39.617729  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:40.113245  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:40.611878  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:41.111492  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:41.612075  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:42.112069  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:42.611675  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:43.112576  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:43.614954  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:44.111916  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:44.613092  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:45.114636  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:45.612783  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:46.112746  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:46.612381  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:47.112157  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:47.619592  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:48.112555  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:48.612734  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:49.112554  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:49.614665  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:50.114357  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:50.612982  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:51.112173  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:51.611996  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:52.111448  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:52.612244  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:53.112468  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:53.612663  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:54.112820  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:54.612603  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:55.112704  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:55.615831  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:56.112043  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:56.620208  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:57.112183  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:57.612032  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:58.112354  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:58.612303  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:59.111942  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:16:59.616695  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:17:00.117034  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:17:00.611910  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:17:01.113926  577671 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0111 08:17:01.616009  577671 kapi.go:107] duration metric: took 2m14.007663344s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0111 08:17:01.619276  577671 out.go:179] * Enabled addons: inspektor-gadget, cloud-spanner, storage-provisioner, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0111 08:17:01.622319  577671 addons.go:530] duration metric: took 2m21.86317929s for enable addons: enabled=[inspektor-gadget cloud-spanner storage-provisioner ingress-dns nvidia-device-plugin amd-gpu-device-plugin registry-creds storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0111 08:17:01.622388  577671 start.go:247] waiting for cluster config update ...
	I0111 08:17:01.622420  577671 start.go:256] writing updated cluster config ...
	I0111 08:17:01.622749  577671 ssh_runner.go:195] Run: rm -f paused
	I0111 08:17:01.628671  577671 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 08:17:01.714555  577671 pod_ready.go:83] waiting for pod "coredns-7d764666f9-sgsk9" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:17:01.721378  577671 pod_ready.go:94] pod "coredns-7d764666f9-sgsk9" is "Ready"
	I0111 08:17:01.721414  577671 pod_ready.go:86] duration metric: took 6.833169ms for pod "coredns-7d764666f9-sgsk9" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:17:01.725901  577671 pod_ready.go:83] waiting for pod "etcd-addons-328805" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:17:01.732725  577671 pod_ready.go:94] pod "etcd-addons-328805" is "Ready"
	I0111 08:17:01.732753  577671 pod_ready.go:86] duration metric: took 6.823774ms for pod "etcd-addons-328805" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:17:01.735339  577671 pod_ready.go:83] waiting for pod "kube-apiserver-addons-328805" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:17:01.742013  577671 pod_ready.go:94] pod "kube-apiserver-addons-328805" is "Ready"
	I0111 08:17:01.742047  577671 pod_ready.go:86] duration metric: took 6.679518ms for pod "kube-apiserver-addons-328805" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:17:01.744873  577671 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-328805" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:17:02.033006  577671 pod_ready.go:94] pod "kube-controller-manager-addons-328805" is "Ready"
	I0111 08:17:02.033035  577671 pod_ready.go:86] duration metric: took 288.132621ms for pod "kube-controller-manager-addons-328805" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:17:02.234212  577671 pod_ready.go:83] waiting for pod "kube-proxy-lmsq4" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:17:02.632782  577671 pod_ready.go:94] pod "kube-proxy-lmsq4" is "Ready"
	I0111 08:17:02.632815  577671 pod_ready.go:86] duration metric: took 398.576992ms for pod "kube-proxy-lmsq4" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:17:02.834599  577671 pod_ready.go:83] waiting for pod "kube-scheduler-addons-328805" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:17:03.233714  577671 pod_ready.go:94] pod "kube-scheduler-addons-328805" is "Ready"
	I0111 08:17:03.233745  577671 pod_ready.go:86] duration metric: took 399.110365ms for pod "kube-scheduler-addons-328805" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:17:03.233759  577671 pod_ready.go:40] duration metric: took 1.605048544s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 08:17:03.291897  577671 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 08:17:03.295099  577671 out.go:203] 
	W0111 08:17:03.298733  577671 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 08:17:03.301719  577671 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 08:17:03.304810  577671 out.go:179] * Done! kubectl is now configured to use "addons-328805" cluster and "default" namespace by default
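	Editor's note: the long "waiting for pod" polling above and the version-skew hint can be checked by hand. A minimal sketch, assuming the profile/context name addons-328805 from this run; these commands are illustrative and were not part of the original test:
	
	  # pods the test was polling for, selected by the label from the kapi.go lines
	  kubectl --context addons-328805 get pods -n kube-system \
	    -l kubernetes.io/minikube-addons=csi-hostpath-driver
	  # addons minikube reports as enabled for this profile
	  out/minikube-linux-arm64 -p addons-328805 addons list
	  # the bundled kubectl suggested in the log, avoiding the 1.33/1.35 skew
	  out/minikube-linux-arm64 kubectl -- get pods -A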
	
	
	==> CRI-O <==
	Jan 11 08:17:04 addons-328805 crio[828]: time="2026-01-11T08:17:04.353664596Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c7ab5f1f9415e46a93d46415b0ec3fc741748dc891bff551d56ca837f745d9f9 UID:43d6c10f-542c-4028-be58-ea31a363fd10 NetNS:/var/run/netns/a8e89f0f-e37a-41fa-9062-7d74566876e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400009d9f8}] Aliases:map[]}"
	Jan 11 08:17:04 addons-328805 crio[828]: time="2026-01-11T08:17:04.353716756Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 11 08:17:04 addons-328805 crio[828]: time="2026-01-11T08:17:04.384076423Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c7ab5f1f9415e46a93d46415b0ec3fc741748dc891bff551d56ca837f745d9f9 UID:43d6c10f-542c-4028-be58-ea31a363fd10 NetNS:/var/run/netns/a8e89f0f-e37a-41fa-9062-7d74566876e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400009d9f8}] Aliases:map[]}"
	Jan 11 08:17:04 addons-328805 crio[828]: time="2026-01-11T08:17:04.384396919Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 11 08:17:04 addons-328805 crio[828]: time="2026-01-11T08:17:04.38921467Z" level=info msg="Ran pod sandbox c7ab5f1f9415e46a93d46415b0ec3fc741748dc891bff551d56ca837f745d9f9 with infra container: default/busybox/POD" id=8d716f79-b189-428b-893e-c33d124acadc name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 08:17:04 addons-328805 crio[828]: time="2026-01-11T08:17:04.391803576Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=088d36e3-e2c7-418a-829a-fb9044f5d123 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:17:04 addons-328805 crio[828]: time="2026-01-11T08:17:04.391973884Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=088d36e3-e2c7-418a-829a-fb9044f5d123 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:17:04 addons-328805 crio[828]: time="2026-01-11T08:17:04.392519327Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=088d36e3-e2c7-418a-829a-fb9044f5d123 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:17:04 addons-328805 crio[828]: time="2026-01-11T08:17:04.393884833Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=33fd96b0-1983-4bd3-afc2-0e9583ccb2f7 name=/runtime.v1.ImageService/PullImage
	Jan 11 08:17:04 addons-328805 crio[828]: time="2026-01-11T08:17:04.394324593Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 11 08:17:06 addons-328805 crio[828]: time="2026-01-11T08:17:06.617273001Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=33fd96b0-1983-4bd3-afc2-0e9583ccb2f7 name=/runtime.v1.ImageService/PullImage
	Jan 11 08:17:06 addons-328805 crio[828]: time="2026-01-11T08:17:06.618227827Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9fb2616a-44a5-4cca-b365-9a5058ef8dfa name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:17:06 addons-328805 crio[828]: time="2026-01-11T08:17:06.620282885Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=17416468-f55f-4440-8a0a-579d2b0605fe name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:17:06 addons-328805 crio[828]: time="2026-01-11T08:17:06.627688286Z" level=info msg="Creating container: default/busybox/busybox" id=2f0af8f6-9dc2-4529-a3f3-639727ad0613 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 08:17:06 addons-328805 crio[828]: time="2026-01-11T08:17:06.627815582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 08:17:06 addons-328805 crio[828]: time="2026-01-11T08:17:06.634710331Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 08:17:06 addons-328805 crio[828]: time="2026-01-11T08:17:06.635406512Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 08:17:06 addons-328805 crio[828]: time="2026-01-11T08:17:06.65157995Z" level=info msg="Created container 730d07e6961cfc6e4d6e18823607d2ae6e104fc921166e8976b08f31552c40c4: default/busybox/busybox" id=2f0af8f6-9dc2-4529-a3f3-639727ad0613 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 08:17:06 addons-328805 crio[828]: time="2026-01-11T08:17:06.652841201Z" level=info msg="Starting container: 730d07e6961cfc6e4d6e18823607d2ae6e104fc921166e8976b08f31552c40c4" id=692c26e4-9f84-4a45-a0ee-82136cffde46 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 08:17:06 addons-328805 crio[828]: time="2026-01-11T08:17:06.655073984Z" level=info msg="Started container" PID=5043 containerID=730d07e6961cfc6e4d6e18823607d2ae6e104fc921166e8976b08f31552c40c4 description=default/busybox/busybox id=692c26e4-9f84-4a45-a0ee-82136cffde46 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7ab5f1f9415e46a93d46415b0ec3fc741748dc891bff551d56ca837f745d9f9
	Jan 11 08:17:08 addons-328805 crio[828]: time="2026-01-11T08:17:08.517244158Z" level=info msg="Checking image status: nvcr.io/nvidia/k8s-device-plugin:v0.18.1@sha256:50ac011ab941ab0140d52f56aa0e2fdc553bca96836ab3b26be394fc823fd9e7" id=88d12b7c-808f-4df4-a0ad-d661739659a9 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:17:08 addons-328805 crio[828]: time="2026-01-11T08:17:08.517450003Z" level=info msg="Image nvcr.io/nvidia/k8s-device-plugin:v0.18.1@sha256:50ac011ab941ab0140d52f56aa0e2fdc553bca96836ab3b26be394fc823fd9e7 not found" id=88d12b7c-808f-4df4-a0ad-d661739659a9 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:17:08 addons-328805 crio[828]: time="2026-01-11T08:17:08.517906927Z" level=info msg="Neither image nor artfiact nvcr.io/nvidia/k8s-device-plugin:v0.18.1@sha256:50ac011ab941ab0140d52f56aa0e2fdc553bca96836ab3b26be394fc823fd9e7 found" id=88d12b7c-808f-4df4-a0ad-d661739659a9 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:17:08 addons-328805 crio[828]: time="2026-01-11T08:17:08.519949324Z" level=info msg="Pulling image: nvcr.io/nvidia/k8s-device-plugin:v0.18.1@sha256:50ac011ab941ab0140d52f56aa0e2fdc553bca96836ab3b26be394fc823fd9e7" id=1b137b9b-3b1d-40f0-a0f6-5de5f44c8cb3 name=/runtime.v1.ImageService/PullImage
	Jan 11 08:17:08 addons-328805 crio[828]: time="2026-01-11T08:17:08.520420157Z" level=info msg="Trying to access \"nvcr.io/nvidia/k8s-device-plugin@sha256:50ac011ab941ab0140d52f56aa0e2fdc553bca96836ab3b26be394fc823fd9e7\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	730d07e6961cf       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   c7ab5f1f9415e       busybox                                     default
	bb52cdcdf2393       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          14 seconds ago       Running             csi-snapshotter                          0                   6803db3e4b305       csi-hostpathplugin-mlwmm                    kube-system
	460c09517af25       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          15 seconds ago       Running             csi-provisioner                          0                   6803db3e4b305       csi-hostpathplugin-mlwmm                    kube-system
	99f9ea965aa44       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            17 seconds ago       Running             liveness-probe                           0                   6803db3e4b305       csi-hostpathplugin-mlwmm                    kube-system
	063615d38eaa0       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago   Running             hostpath                                 0                   6803db3e4b305       csi-hostpathplugin-mlwmm                    kube-system
	539e50d75f00e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 About a minute ago   Running             gcp-auth                                 0                   500ec70369fc5       gcp-auth-5bbcf684b5-ndddh                   gcp-auth
	ae32c30d2bbd5       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             About a minute ago   Running             controller                               0                   2a32a6e1b2eba       ingress-nginx-controller-7847b5c79c-fjvtg   ingress-nginx
	84537b1c3e760       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   6803db3e4b305       csi-hostpathplugin-mlwmm                    kube-system
	f7e4abc8d163a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:d72bd468a5addb0c00bee32b564fe51e54a7e83195da28701dc4e8e1e019ae08                            About a minute ago   Running             gadget                                   0                   cb2a00ddc7c00       gadget-858g6                                gadget
	19de157b5a3ae       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   80ea32b70e944       registry-proxy-dksf6                        kube-system
	0635f7e73bbf1       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   60a6cd8fc409c       csi-hostpath-resizer-0                      kube-system
	204e7393acc00       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   fa378f5776c7b       csi-hostpath-attacher-0                     kube-system
	3d3e46b48a9ca       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              patch                                    0                   bc9d1d7ec0aa8       ingress-nginx-admission-patch-l4rjb         ingress-nginx
	aeb193327e0b9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   9885adf60d7dd       snapshot-controller-6588d87457-rqdsw        kube-system
	543a3169c0f5d       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   c0c0bc6f85ca6       registry-788cd7d5bc-8s2dv                   kube-system
	5ff7a6909d555       ghcr.io/manusa/yakd@sha256:68bfcea671292190cdd2b127455726ac24794d1f7c55ce74c33d4648a3a0f50b                                                  About a minute ago   Running             yakd                                     0                   acab692d13020       yakd-dashboard-7bcf5795cd-5582j             yakd-dashboard
	361b88e03c65f       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   73bc5563df018       local-path-provisioner-c44bcd496-6zpmt      local-path-storage
	2ac3b5eadf88b       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   d27eeb973d8f7       snapshot-controller-6588d87457-szpvw        kube-system
	64ea7483ae060       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   6803db3e4b305       csi-hostpathplugin-mlwmm                    kube-system
	c6bd3f30b6ca6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   2 minutes ago        Exited              create                                   0                   1bd46eef58278       ingress-nginx-admission-create-rs2wb        ingress-nginx
	2088bf5377a9b       gcr.io/cloud-spanner-emulator/emulator@sha256:084e511546640743b2d25fe2ee59800bc7ec910acfc12175bad2270f159f5eba                               2 minutes ago        Running             cloud-spanner-emulator                   0                   3a8382b68a6f7       cloud-spanner-emulator-5649ccbc87-dw85r     default
	fa285db56145c       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        2 minutes ago        Running             metrics-server                           0                   42aa9c2a9a61d       metrics-server-5778bb4788-gbb2s             kube-system
	5ccd2254d43c3       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               2 minutes ago        Running             minikube-ingress-dns                     0                   bd8591958828e       kube-ingress-dns-minikube                   kube-system
	9f08c00a9e5cb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             2 minutes ago        Running             storage-provisioner                      0                   ac9ff31e70615       storage-provisioner                         kube-system
	20073a807f1d5       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                                                             2 minutes ago        Running             coredns                                  0                   fbe5f467f4d65       coredns-7d764666f9-sgsk9                    kube-system
	2f5531b121ed5       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           2 minutes ago        Running             kindnet-cni                              0                   c4c502f46ed2e       kindnet-qdjqq                               kube-system
	e703aa2a2f4ba       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                                                             2 minutes ago        Running             kube-proxy                               0                   11b892e098de5       kube-proxy-lmsq4                            kube-system
	53d7d47c8ab1f       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                                                             2 minutes ago        Running             kube-scheduler                           0                   e387cd28ad792       kube-scheduler-addons-328805                kube-system
	2c030dbbc2adf       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                                                             2 minutes ago        Running             kube-controller-manager                  0                   024c119a75d7e       kube-controller-manager-addons-328805       kube-system
	9727839e3a9c5       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                                                             2 minutes ago        Running             kube-apiserver                           0                   5eba92c6973bd       kube-apiserver-addons-328805                kube-system
	1ff80bbdd9d61       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                                                             2 minutes ago        Running             etcd                                     0                   7cd583b4a6c62       etcd-addons-328805                          kube-system
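	Editor's note: a table like the one above is what crictl prints for CRI-O; a minimal sketch of how to reproduce it on the node, assuming the same profile name (illustrative only, not part of the original run):
	
	  # open a shell on the minikube node and list all CRI-O containers
	  out/minikube-linux-arm64 -p addons-328805 ssh -- sudo crictl ps -a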
	
	
	==> coredns [20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4] <==
	[INFO] 10.244.0.17:36366 - 49553 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000145273s
	[INFO] 10.244.0.17:36366 - 50294 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.006152348s
	[INFO] 10.244.0.17:36366 - 56629 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.006309158s
	[INFO] 10.244.0.17:36366 - 17752 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000169496s
	[INFO] 10.244.0.17:36366 - 64827 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000127091s
	[INFO] 10.244.0.17:32899 - 57311 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000292032s
	[INFO] 10.244.0.17:32899 - 57048 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000257521s
	[INFO] 10.244.0.17:58102 - 6342 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000127419s
	[INFO] 10.244.0.17:58102 - 6129 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000209266s
	[INFO] 10.244.0.17:50781 - 10725 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097774s
	[INFO] 10.244.0.17:50781 - 10914 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000164277s
	[INFO] 10.244.0.17:51985 - 22681 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001645912s
	[INFO] 10.244.0.17:51985 - 22903 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001783653s
	[INFO] 10.244.0.17:55636 - 40395 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000161652s
	[INFO] 10.244.0.17:55636 - 39975 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000143977s
	[INFO] 10.244.0.20:41902 - 12738 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000158714s
	[INFO] 10.244.0.20:60220 - 56014 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000114299s
	[INFO] 10.244.0.20:38910 - 26101 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000080198s
	[INFO] 10.244.0.20:40364 - 9416 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000087115s
	[INFO] 10.244.0.20:40713 - 8727 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138513s
	[INFO] 10.244.0.20:56122 - 38290 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007886s
	[INFO] 10.244.0.20:51940 - 36131 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00264823s
	[INFO] 10.244.0.20:59558 - 32609 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002713437s
	[INFO] 10.244.0.20:55496 - 20652 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001534288s
	[INFO] 10.244.0.20:47847 - 17192 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00233904s
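	Editor's note: the NXDOMAIN entries above are the normal effect of the pod DNS search path: each query is retried with the cluster search suffixes before the fully qualified name answers NOERROR. A minimal sketch of observing the same lookup from the default/busybox pod created earlier (illustrative only):
	
	  kubectl --context addons-328805 exec busybox -- nslookup registry.kube-system.svc.cluster.local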
	
	
	==> describe nodes <==
	Name:               addons-328805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-328805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=addons-328805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T08_14_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-328805
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-328805"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 08:14:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-328805
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 08:17:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 08:16:57 +0000   Sun, 11 Jan 2026 08:14:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 08:16:57 +0000   Sun, 11 Jan 2026 08:14:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 08:16:57 +0000   Sun, 11 Jan 2026 08:14:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 08:16:57 +0000   Sun, 11 Jan 2026 08:14:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-328805
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                73157eb7-46e6-4a4e-ba43-0acdc6aef011
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5649ccbc87-dw85r      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gadget                      gadget-858g6                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gcp-auth                    gcp-auth-5bbcf684b5-ndddh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-fjvtg    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m28s
	  kube-system                 coredns-7d764666f9-sgsk9                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m35s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 csi-hostpathplugin-mlwmm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 etcd-addons-328805                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m41s
	  kube-system                 kindnet-qdjqq                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m36s
	  kube-system                 kube-apiserver-addons-328805                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-controller-manager-addons-328805        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-lmsq4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-scheduler-addons-328805                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 metrics-server-5778bb4788-gbb2s              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m29s
	  kube-system                 nvidia-device-plugin-daemonset-gjpvb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 registry-788cd7d5bc-8s2dv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 registry-creds-567fb78d95-75qnp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 registry-proxy-dksf6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 snapshot-controller-6588d87457-rqdsw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 snapshot-controller-6588d87457-szpvw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  local-path-storage          local-path-provisioner-c44bcd496-6zpmt       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  yakd-dashboard              yakd-dashboard-7bcf5795cd-5582j              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  2m37s  node-controller  Node addons-328805 event: Registered Node addons-328805 in Controller
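	Editor's note: the node summary above matches standard kubectl output; a minimal sketch of the equivalent query, assuming the context created for this profile (illustrative only):
	
	  kubectl --context addons-328805 describe node addons-328805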
	
	
	==> dmesg <==
	[ +33.313984] overlayfs: idmapped layers are currently not supported
	[Jan11 07:45] overlayfs: idmapped layers are currently not supported
	[Jan11 07:46] overlayfs: idmapped layers are currently not supported
	[Jan11 07:50] overlayfs: idmapped layers are currently not supported
	[Jan11 07:51] overlayfs: idmapped layers are currently not supported
	[Jan11 07:56] overlayfs: idmapped layers are currently not supported
	[ +35.303321] overlayfs: idmapped layers are currently not supported
	[Jan11 07:58] overlayfs: idmapped layers are currently not supported
	[Jan11 07:59] overlayfs: idmapped layers are currently not supported
	[Jan11 08:00] overlayfs: idmapped layers are currently not supported
	[Jan11 08:02] overlayfs: idmapped layers are currently not supported
	[Jan11 08:03] overlayfs: idmapped layers are currently not supported
	[ +52.666146] overlayfs: idmapped layers are currently not supported
	[Jan11 08:04] overlayfs: idmapped layers are currently not supported
	[ +24.362091] overlayfs: idmapped layers are currently not supported
	[  +2.288164] overlayfs: idmapped layers are currently not supported
	[Jan11 08:05] overlayfs: idmapped layers are currently not supported
	[Jan11 08:06] overlayfs: idmapped layers are currently not supported
	[Jan11 08:07] overlayfs: idmapped layers are currently not supported
	[Jan11 08:08] overlayfs: idmapped layers are currently not supported
	[ +56.631315] overlayfs: idmapped layers are currently not supported
	[Jan11 08:09] overlayfs: idmapped layers are currently not supported
	[Jan11 08:10] overlayfs: idmapped layers are currently not supported
	[Jan11 08:13] kauditd_printk_skb: 8 callbacks suppressed
	[Jan11 08:14] overlayfs: idmapped layers are currently not supported
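	Editor's note: the kernel ring-buffer excerpt above can be regathered from the node; a minimal sketch, assuming the same profile name (illustrative only):
	
	  out/minikube-linux-arm64 -p addons-328805 ssh -- sudo dmesg | tail -n 30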
	
	
	==> etcd [1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c] <==
	{"level":"info","ts":"2026-01-11T08:14:29.250515Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T08:14:30.009631Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-11T08:14:30.009794Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-11T08:14:30.009858Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2026-01-11T08:14:30.009882Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T08:14:30.009899Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T08:14:30.011630Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2026-01-11T08:14:30.011687Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T08:14:30.011714Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2026-01-11T08:14:30.011729Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2026-01-11T08:14:30.013553Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T08:14:30.015502Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-328805 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T08:14:30.015678Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T08:14:30.020003Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T08:14:30.020931Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T08:14:30.021023Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T08:14:30.021400Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T08:14:30.022452Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T08:14:30.022637Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T08:14:30.022860Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T08:14:30.022689Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T08:14:30.022993Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T08:14:30.023081Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-11T08:14:30.026368Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T08:14:30.027553Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [539e50d75f00e5ae9a8b76809b852f2454673a61cc13205d1a41898d1999d264] <==
	2026/01/11 08:15:51 GCP Auth Webhook started!
	2026/01/11 08:17:03 Ready to marshal response ...
	2026/01/11 08:17:03 Ready to write response ...
	2026/01/11 08:17:04 Ready to marshal response ...
	2026/01/11 08:17:04 Ready to write response ...
	2026/01/11 08:17:04 Ready to marshal response ...
	2026/01/11 08:17:04 Ready to write response ...
	
	
	==> kernel <==
	 08:17:15 up  2:59,  0 user,  load average: 1.21, 2.15, 2.35
	Linux addons-328805 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0] <==
	I0111 08:15:14.743332       1 main.go:301] handling current node
	I0111 08:15:24.743031       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 08:15:24.743066       1 main.go:301] handling current node
	I0111 08:15:34.743298       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 08:15:34.743362       1 main.go:301] handling current node
	I0111 08:15:44.743263       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 08:15:44.743299       1 main.go:301] handling current node
	I0111 08:15:54.742507       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 08:15:54.742542       1 main.go:301] handling current node
	I0111 08:16:04.743025       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 08:16:04.743078       1 main.go:301] handling current node
	I0111 08:16:14.750102       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 08:16:14.750166       1 main.go:301] handling current node
	I0111 08:16:24.747944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 08:16:24.747979       1 main.go:301] handling current node
	I0111 08:16:34.752124       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 08:16:34.752162       1 main.go:301] handling current node
	I0111 08:16:44.745219       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 08:16:44.745254       1 main.go:301] handling current node
	I0111 08:16:54.748342       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 08:16:54.748383       1 main.go:301] handling current node
	I0111 08:17:04.743370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 08:17:04.743452       1 main.go:301] handling current node
	I0111 08:17:14.747485       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0111 08:17:14.747539       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9] <==
	I0111 08:14:47.554315       1 alloc.go:329] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.101.95.151"}
	W0111 08:14:47.824719       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0111 08:14:47.847195       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I0111 08:14:50.707560       1 alloc.go:329] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.158.176"}
	W0111 08:14:54.819969       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.158.176:443: connect: connection refused
	E0111 08:14:54.820022       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.158.176:443: connect: connection refused" logger="UnhandledError"
	W0111 08:14:54.820112       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.158.176:443: connect: connection refused
	E0111 08:14:54.820193       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.158.176:443: connect: connection refused" logger="UnhandledError"
	W0111 08:14:54.909642       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.158.176:443: connect: connection refused
	E0111 08:14:54.919856       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.158.176:443: connect: connection refused" logger="UnhandledError"
	W0111 08:15:08.991281       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0111 08:15:09.020762       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0111 08:15:09.066693       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0111 08:15:09.092263       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E0111 08:15:19.083695       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.202.46:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.202.46:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.202.46:443: connect: connection refused" logger="UnhandledError"
	W0111 08:15:19.084448       1 handler_proxy.go:99] no RequestInfo found in the context
	E0111 08:15:19.084513       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0111 08:15:19.085490       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.202.46:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.202.46:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.202.46:443: connect: connection refused" logger="UnhandledError"
	E0111 08:15:19.090276       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.202.46:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.202.46:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.202.46:443: connect: connection refused" logger="UnhandledError"
	I0111 08:15:19.209502       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0111 08:17:13.505899       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41580: use of closed network connection
	E0111 08:17:13.636610       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41588: use of closed network connection
	
	
	==> kube-controller-manager [2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512] <==
	I0111 08:14:38.937204       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:38.937210       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:38.937216       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:38.939391       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:38.939415       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:38.939440       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:38.940398       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:38.941179       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:38.941488       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:38.941510       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:38.949972       1 range_allocator.go:433] "Set node PodCIDR" node="addons-328805" podCIDRs=["10.244.0.0/24"]
	I0111 08:14:38.968386       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:39.019934       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:39.021192       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:39.021276       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 08:14:39.021307       1 garbagecollector.go:169] "Proceeding to collect garbage"
	E0111 08:14:46.337458       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/metrics-server-5778bb4788\" failed with pods \"metrics-server-5778bb4788-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I0111 08:14:58.924837       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	E0111 08:15:08.974117       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0111 08:15:08.974304       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0111 08:15:08.974345       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:15:09.034150       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0111 08:15:09.053261       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:15:09.075024       1 shared_informer.go:377] "Caches are synced"
	I0111 08:15:09.154022       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e] <==
	I0111 08:14:41.599841       1 server_linux.go:53] "Using iptables proxy"
	I0111 08:14:41.695693       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:14:41.796559       1 shared_informer.go:377] "Caches are synced"
	I0111 08:14:41.796597       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0111 08:14:41.796662       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 08:14:41.860845       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 08:14:41.860914       1 server_linux.go:136] "Using iptables Proxier"
	I0111 08:14:41.868914       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 08:14:41.869195       1 server.go:529] "Version info" version="v1.35.0"
	I0111 08:14:41.869209       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 08:14:41.875061       1 config.go:200] "Starting service config controller"
	I0111 08:14:41.875079       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 08:14:41.875096       1 config.go:106] "Starting endpoint slice config controller"
	I0111 08:14:41.875100       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 08:14:41.875109       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 08:14:41.875113       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 08:14:41.877735       1 config.go:309] "Starting node config controller"
	I0111 08:14:41.877753       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 08:14:41.877761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 08:14:41.975211       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 08:14:41.975240       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 08:14:41.975274       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0] <==
	E0111 08:14:32.178691       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 08:14:32.178778       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 08:14:32.178912       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 08:14:32.179341       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 08:14:32.180982       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 08:14:32.180929       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 08:14:32.181054       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 08:14:32.180861       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 08:14:32.181235       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 08:14:32.181291       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 08:14:32.181323       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 08:14:32.181425       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 08:14:32.181476       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0111 08:14:32.181530       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 08:14:32.181573       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 08:14:32.181595       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 08:14:33.008747       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 08:14:33.119358       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0111 08:14:33.132276       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 08:14:33.190939       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 08:14:33.193662       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 08:14:33.194292       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0111 08:14:33.205071       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 08:14:33.317966       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	I0111 08:14:35.250782       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 08:16:18 addons-328805 kubelet[1258]: E0111 08:16:18.516330    1258 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-addons-328805" containerName="kube-apiserver"
	Jan 11 08:16:19 addons-328805 kubelet[1258]: E0111 08:16:19.515824    1258 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-addons-328805" containerName="kube-controller-manager"
	Jan 11 08:16:27 addons-328805 kubelet[1258]: E0111 08:16:27.516416    1258 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-sgsk9" containerName="coredns"
	Jan 11 08:16:34 addons-328805 kubelet[1258]: I0111 08:16:34.561502    1258 scope.go:122] "RemoveContainer" containerID="ffe06bc5f83c50b3fe953611f5b42adfcd3bcafe4573998e255ab10f725362a4"
	Jan 11 08:16:34 addons-328805 kubelet[1258]: I0111 08:16:34.569754    1258 scope.go:122] "RemoveContainer" containerID="8ff0a0b39c4c7a87a8af0911d262477e6dea8e34ff00a0761b36df972fea6bb7"
	Jan 11 08:16:39 addons-328805 kubelet[1258]: E0111 08:16:39.516446    1258 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5778bb4788-gbb2s" containerName="metrics-server"
	Jan 11 08:16:43 addons-328805 kubelet[1258]: I0111 08:16:43.516378    1258 kubelet_pods.go:1079] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-788cd7d5bc-8s2dv" secret="" err="secret \"gcp-auth\" not found"
	Jan 11 08:16:45 addons-328805 kubelet[1258]: E0111 08:16:45.515978    1258 prober_manager.go:209] "Readiness probe already exists for container" pod="yakd-dashboard/yakd-dashboard-7bcf5795cd-5582j" containerName="yakd"
	Jan 11 08:16:55 addons-328805 kubelet[1258]: I0111 08:16:55.516277    1258 kubelet_pods.go:1079] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dksf6" secret="" err="secret \"gcp-auth\" not found"
	Jan 11 08:16:56 addons-328805 kubelet[1258]: E0111 08:16:56.806400    1258 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = reference \"[overlay@/var/lib/containers/storage+/run/containers/storage]nvcr.io/nvidia/k8s-device-plugin@sha256:cc09f9025a3d03e2452b6765bab67b0e74c26d05e882ecd3017d7eb355ae426a\" does not resolve to an image ID" podSandboxID="596faaa65d8bcd51016802ddcc8b7d4857046ea444741a61305592f45cb21855"
	Jan 11 08:16:56 addons-328805 kubelet[1258]: E0111 08:16:56.806891    1258 kuberuntime_manager.go:1664] "Unhandled Error" err="container nvidia-device-plugin-ctr start failed in pod nvidia-device-plugin-daemonset-gjpvb_kube-system(290d35b5-ee15-4b67-8a71-9b269776d8c4): CreateContainerError: reference \"[overlay@/var/lib/containers/storage+/run/containers/storage]nvcr.io/nvidia/k8s-device-plugin@sha256:cc09f9025a3d03e2452b6765bab67b0e74c26d05e882ecd3017d7eb355ae426a\" does not resolve to an image ID" logger="UnhandledError"
	Jan 11 08:16:56 addons-328805 kubelet[1258]: E0111 08:16:56.806946    1258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nvidia-device-plugin-ctr\" with CreateContainerError: \"reference \\\"[overlay@/var/lib/containers/storage+/run/containers/storage]nvcr.io/nvidia/k8s-device-plugin@sha256:cc09f9025a3d03e2452b6765bab67b0e74c26d05e882ecd3017d7eb355ae426a\\\" does not resolve to an image ID\"" pod="kube-system/nvidia-device-plugin-daemonset-gjpvb" podUID="290d35b5-ee15-4b67-8a71-9b269776d8c4"
	Jan 11 08:16:57 addons-328805 kubelet[1258]: E0111 08:16:57.879099    1258 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-567fb78d95-75qnp" podUID="aa246809-0cd0-490d-a889-84860eb2548e"
	Jan 11 08:17:01 addons-328805 kubelet[1258]: E0111 08:17:01.595102    1258 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-mlwmm" containerName="hostpath"
	Jan 11 08:17:01 addons-328805 kubelet[1258]: I0111 08:17:01.617838    1258 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-mlwmm" podStartSLOduration=2.016596798 podStartE2EDuration="2m7.617801375s" podCreationTimestamp="2026-01-11 08:14:54 +0000 UTC" firstStartedPulling="2026-01-11 08:14:55.685099777 +0000 UTC m=+21.288129767" lastFinishedPulling="2026-01-11 08:17:01.286304354 +0000 UTC m=+146.889334344" observedRunningTime="2026-01-11 08:17:01.614932466 +0000 UTC m=+147.217962455" watchObservedRunningTime="2026-01-11 08:17:01.617801375 +0000 UTC m=+147.220831365"
	Jan 11 08:17:02 addons-328805 kubelet[1258]: E0111 08:17:02.515948    1258 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-fjvtg" containerName="controller"
	Jan 11 08:17:02 addons-328805 kubelet[1258]: E0111 08:17:02.599246    1258 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-mlwmm" containerName="hostpath"
	Jan 11 08:17:02 addons-328805 kubelet[1258]: E0111 08:17:02.827238    1258 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Jan 11 08:17:02 addons-328805 kubelet[1258]: E0111 08:17:02.827335    1258 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa246809-0cd0-490d-a889-84860eb2548e-gcr-creds podName:aa246809-0cd0-490d-a889-84860eb2548e nodeName:}" failed. No retries permitted until 2026-01-11 08:19:04.827317256 +0000 UTC m=+270.430347254 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/aa246809-0cd0-490d-a889-84860eb2548e-gcr-creds") pod "registry-creds-567fb78d95-75qnp" (UID: "aa246809-0cd0-490d-a889-84860eb2548e") : secret "registry-creds-gcr" not found
	Jan 11 08:17:04 addons-328805 kubelet[1258]: I0111 08:17:04.140849    1258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7wnm\" (UniqueName: \"kubernetes.io/projected/43d6c10f-542c-4028-be58-ea31a363fd10-kube-api-access-x7wnm\") pod \"busybox\" (UID: \"43d6c10f-542c-4028-be58-ea31a363fd10\") " pod="default/busybox"
	Jan 11 08:17:04 addons-328805 kubelet[1258]: I0111 08:17:04.140910    1258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/43d6c10f-542c-4028-be58-ea31a363fd10-gcp-creds\") pod \"busybox\" (UID: \"43d6c10f-542c-4028-be58-ea31a363fd10\") " pod="default/busybox"
	Jan 11 08:17:08 addons-328805 kubelet[1258]: I0111 08:17:08.516206    1258 kubelet_pods.go:1079] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gjpvb" secret="" err="secret \"gcp-auth\" not found"
	Jan 11 08:17:08 addons-328805 kubelet[1258]: I0111 08:17:08.538681    1258 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.312570038 podStartE2EDuration="4.538663864s" podCreationTimestamp="2026-01-11 08:17:04 +0000 UTC" firstStartedPulling="2026-01-11 08:17:04.393047153 +0000 UTC m=+149.996077143" lastFinishedPulling="2026-01-11 08:17:06.619140971 +0000 UTC m=+152.222170969" observedRunningTime="2026-01-11 08:17:07.635025214 +0000 UTC m=+153.238055203" watchObservedRunningTime="2026-01-11 08:17:08.538663864 +0000 UTC m=+154.141693879"
	Jan 11 08:17:13 addons-328805 kubelet[1258]: E0111 08:17:13.506535    1258 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50024->127.0.0.1:33135: write tcp 127.0.0.1:50024->127.0.0.1:33135: write: broken pipe
	Jan 11 08:17:14 addons-328805 kubelet[1258]: E0111 08:17:14.519529    1258 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-858g6" containerName="gadget"
	
	
	==> storage-provisioner [9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2] <==
	W0111 08:16:50.697803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:16:52.700717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:16:52.707614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:16:54.710665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:16:54.717395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:16:56.721159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:16:56.726657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:16:58.730753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:16:58.735541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:00.739428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:00.745272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:02.748159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:02.753423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:04.756625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:04.763072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:06.766284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:06.773355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:08.776739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:08.781697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:10.784255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:10.788677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:12.791167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:12.797833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:14.801696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 08:17:14.807454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-328805 -n addons-328805
helpers_test.go:270: (dbg) Run:  kubectl --context addons-328805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-rs2wb ingress-nginx-admission-patch-l4rjb nvidia-device-plugin-daemonset-gjpvb registry-creds-567fb78d95-75qnp
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-328805 describe pod ingress-nginx-admission-create-rs2wb ingress-nginx-admission-patch-l4rjb nvidia-device-plugin-daemonset-gjpvb registry-creds-567fb78d95-75qnp
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-328805 describe pod ingress-nginx-admission-create-rs2wb ingress-nginx-admission-patch-l4rjb nvidia-device-plugin-daemonset-gjpvb registry-creds-567fb78d95-75qnp: exit status 1 (89.569566ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rs2wb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-l4rjb" not found
	Error from server (NotFound): pods "nvidia-device-plugin-daemonset-gjpvb" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-75qnp" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-328805 describe pod ingress-nginx-admission-create-rs2wb ingress-nginx-admission-patch-l4rjb nvidia-device-plugin-daemonset-gjpvb registry-creds-567fb78d95-75qnp: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable headlamp --alsologtostderr -v=1: exit status 11 (254.076161ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:17:16.779753  584314 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:17:16.781037  584314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:17:16.781104  584314 out.go:374] Setting ErrFile to fd 2...
	I0111 08:17:16.781128  584314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:17:16.781641  584314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:17:16.782052  584314 mustload.go:66] Loading cluster: addons-328805
	I0111 08:17:16.782763  584314 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:17:16.782811  584314 addons.go:622] checking whether the cluster is paused
	I0111 08:17:16.782992  584314 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:17:16.783035  584314 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:17:16.783815  584314 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:17:16.801937  584314 ssh_runner.go:195] Run: systemctl --version
	I0111 08:17:16.801992  584314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:17:16.825912  584314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:17:16.928737  584314 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:17:16.928832  584314 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:17:16.958035  584314 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:17:16.958058  584314 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:17:16.958063  584314 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:17:16.958067  584314 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:17:16.958071  584314 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:17:16.958074  584314 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:17:16.958077  584314 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:17:16.958080  584314 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:17:16.958093  584314 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:17:16.958103  584314 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:17:16.958110  584314 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:17:16.958113  584314 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:17:16.958117  584314 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:17:16.958120  584314 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:17:16.958147  584314 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:17:16.958163  584314 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:17:16.958167  584314 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:17:16.958171  584314 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:17:16.958174  584314 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:17:16.958177  584314 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:17:16.958183  584314 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:17:16.958186  584314 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:17:16.958189  584314 cri.go:96] found id: ""
	I0111 08:17:16.958250  584314 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:17:16.974616  584314 out.go:203] 
	W0111 08:17:16.977590  584314 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:17:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:17:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:17:16.977621  584314 out.go:285] * 
	* 
	W0111 08:17:16.981665  584314 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:17:16.984581  584314 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.09s)
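Note: this failure and the cloud-spanner and storage-provisioner-rancher failures below all exit with MK_ADDON_DISABLE_PAUSED for the same reason visible in the --alsologtostderr traces: before disabling an addon, minikube checks whether the cluster is paused by listing CRI-O containers and then running "sudo runc list -f json" on the node, and the runc call fails because /run/runc does not exist. A minimal reproduction sketch against the same profile; the first two commands reuse what the traces already run, and the final ls check is an added assumption used only to confirm the missing state directory:

	# pause-check steps reported in the traces above
	out/minikube-linux-arm64 -p addons-328805 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the step that actually fails: runc cannot open its state directory
	out/minikube-linux-arm64 -p addons-328805 ssh "sudo runc list -f json"
	# added check (not in the original log) to confirm /run/runc is absent on the CRI-O node
	out/minikube-linux-arm64 -p addons-328805 ssh "ls -ld /run/runc"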

                                                
                                    
TestAddons/parallel/CloudSpanner (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-dw85r" [6ebae978-b173-44bc-b749-54f25b01c4d0] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003673147s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (259.557082ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:18:18.582667  585314 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:18:18.583462  585314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:18.583480  585314 out.go:374] Setting ErrFile to fd 2...
	I0111 08:18:18.583487  585314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:18.583780  585314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:18:18.584088  585314 mustload.go:66] Loading cluster: addons-328805
	I0111 08:18:18.584585  585314 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:18.584615  585314 addons.go:622] checking whether the cluster is paused
	I0111 08:18:18.584770  585314 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:18.584807  585314 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:18:18.585328  585314 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:18:18.609328  585314 ssh_runner.go:195] Run: systemctl --version
	I0111 08:18:18.609437  585314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:18:18.626484  585314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:18:18.735121  585314 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:18:18.735224  585314 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:18:18.765711  585314 cri.go:96] found id: "32ae440548e76057587bf0d296846dddcdab9df1ad2abd21f57699d174d18a11"
	I0111 08:18:18.765734  585314 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:18:18.765739  585314 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:18:18.765743  585314 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:18:18.765746  585314 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:18:18.765750  585314 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:18:18.765753  585314 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:18:18.765764  585314 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:18:18.765768  585314 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:18:18.765781  585314 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:18:18.765791  585314 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:18:18.765794  585314 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:18:18.765797  585314 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:18:18.765800  585314 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:18:18.765804  585314 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:18:18.765818  585314 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:18:18.765822  585314 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:18:18.765827  585314 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:18:18.765830  585314 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:18:18.765840  585314 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:18:18.765849  585314 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:18:18.765875  585314 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:18:18.765885  585314 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:18:18.765889  585314 cri.go:96] found id: ""
	I0111 08:18:18.765951  585314 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:18:18.781573  585314 out.go:203] 
	W0111 08:18:18.784441  585314 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:18:18.784466  585314 out.go:285] * 
	* 
	W0111 08:18:18.788530  585314 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:18:18.791505  585314 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)

                                                
                                    
TestAddons/parallel/LocalPath (9.42s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-328805 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-328805 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-328805 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [e57bbfe2-db63-4a70-90bd-bf286af5288c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [e57bbfe2-db63-4a70-90bd-bf286af5288c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [e57bbfe2-db63-4a70-90bd-bf286af5288c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003189317s
addons_test.go:969: (dbg) Run:  kubectl --context addons-328805 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 ssh "cat /opt/local-path-provisioner/pvc-788f2ee5-6efd-46e5-b89e-b326dbda5d9b_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-328805 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-328805 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (295.473528ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:18:19.537350  585435 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:18:19.538266  585435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:19.538313  585435 out.go:374] Setting ErrFile to fd 2...
	I0111 08:18:19.538335  585435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:19.538627  585435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:18:19.539009  585435 mustload.go:66] Loading cluster: addons-328805
	I0111 08:18:19.539444  585435 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:19.539494  585435 addons.go:622] checking whether the cluster is paused
	I0111 08:18:19.539631  585435 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:19.539674  585435 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:18:19.540276  585435 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:18:19.560514  585435 ssh_runner.go:195] Run: systemctl --version
	I0111 08:18:19.560574  585435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:18:19.579343  585435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:18:19.688589  585435 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:18:19.688714  585435 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:18:19.730450  585435 cri.go:96] found id: "32ae440548e76057587bf0d296846dddcdab9df1ad2abd21f57699d174d18a11"
	I0111 08:18:19.730472  585435 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:18:19.730478  585435 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:18:19.730482  585435 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:18:19.730486  585435 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:18:19.730489  585435 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:18:19.730492  585435 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:18:19.730495  585435 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:18:19.730498  585435 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:18:19.730503  585435 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:18:19.730507  585435 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:18:19.730510  585435 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:18:19.730513  585435 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:18:19.730522  585435 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:18:19.730526  585435 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:18:19.730531  585435 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:18:19.730534  585435 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:18:19.730538  585435 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:18:19.730541  585435 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:18:19.730544  585435 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:18:19.730547  585435 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:18:19.730549  585435 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:18:19.730552  585435 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:18:19.730555  585435 cri.go:96] found id: ""
	I0111 08:18:19.730605  585435 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:18:19.757699  585435 out.go:203] 
	W0111 08:18:19.761026  585435 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:18:19.761111  585435 out.go:285] * 
	* 
	W0111 08:18:19.765321  585435 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:18:19.768662  585435 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.42s)
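
Note: the MK_ADDON_DISABLE_PAUSED exit above (also hit by the NvidiaDevicePlugin and Yakd runs below) comes from the paused check that "addons disable" performs. The kube-system containers are listed via crictl successfully, but the follow-up "sudo runc list -f json" fails because /run/runc does not exist on this CRI-O node. A rough manual reproduction against the same profile might look like the commands below; this is a sketch only, and the /run/crun path is an assumption rather than something confirmed by this log:

	out/minikube-linux-arm64 -p addons-328805 ssh "sudo runc list -f json"                # fails here: open /run/runc: no such file or directory
	out/minikube-linux-arm64 -p addons-328805 ssh "sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system"   # CRI-O itself still reports the kube-system containers
	out/minikube-linux-arm64 -p addons-328805 ssh "ls -d /run/runc /run/crun"             # check which runtime state directories actually exist (crun path assumed)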

TestAddons/parallel/NvidiaDevicePlugin (50.27s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-gjpvb" [290d35b5-ee15-4b67-8a71-9b269776d8c4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
helpers_test.go:353: "nvidia-device-plugin-daemonset-gjpvb" [290d35b5-ee15-4b67-8a71-9b269776d8c4] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 50.003295588s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (265.399329ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 08:18:13.309228  585149 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:18:13.310186  585149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:13.310227  585149 out.go:374] Setting ErrFile to fd 2...
	I0111 08:18:13.310249  585149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:18:13.310647  585149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:18:13.311059  585149 mustload.go:66] Loading cluster: addons-328805
	I0111 08:18:13.311739  585149 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:13.311791  585149 addons.go:622] checking whether the cluster is paused
	I0111 08:18:13.312438  585149 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:18:13.312509  585149 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:18:13.313041  585149 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:18:13.330762  585149 ssh_runner.go:195] Run: systemctl --version
	I0111 08:18:13.330823  585149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:18:13.351065  585149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:18:13.456927  585149 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:18:13.457003  585149 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:18:13.494557  585149 cri.go:96] found id: "32ae440548e76057587bf0d296846dddcdab9df1ad2abd21f57699d174d18a11"
	I0111 08:18:13.494580  585149 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:18:13.494585  585149 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:18:13.494589  585149 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:18:13.494593  585149 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:18:13.494596  585149 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:18:13.494600  585149 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:18:13.494604  585149 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:18:13.494607  585149 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:18:13.494618  585149 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:18:13.494625  585149 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:18:13.494628  585149 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:18:13.494633  585149 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:18:13.494640  585149 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:18:13.494643  585149 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:18:13.494648  585149 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:18:13.494652  585149 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:18:13.494656  585149 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:18:13.494663  585149 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:18:13.494670  585149 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:18:13.494675  585149 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:18:13.494679  585149 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:18:13.494682  585149 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:18:13.494685  585149 cri.go:96] found id: ""
	I0111 08:18:13.494735  585149 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:18:13.510288  585149 out.go:203] 
	W0111 08:18:13.513350  585149 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:18:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:18:13.513391  585149 out.go:285] * 
	* 
	W0111 08:18:13.517557  585149 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:18:13.520457  585149 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (50.27s)

TestAddons/parallel/Yakd (6.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-5582j" [bec4744c-4e0e-415d-a7de-e6b0a29645ed] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003603786s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-328805 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-328805 addons disable yakd --alsologtostderr -v=1: exit status 11 (263.217182ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0111 08:17:23.048346  584390 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:17:23.049225  584390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:17:23.049259  584390 out.go:374] Setting ErrFile to fd 2...
	I0111 08:17:23.049286  584390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:17:23.049710  584390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:17:23.050215  584390 mustload.go:66] Loading cluster: addons-328805
	I0111 08:17:23.050910  584390 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:17:23.050944  584390 addons.go:622] checking whether the cluster is paused
	I0111 08:17:23.051372  584390 config.go:182] Loaded profile config "addons-328805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:17:23.051417  584390 host.go:66] Checking if "addons-328805" exists ...
	I0111 08:17:23.051988  584390 cli_runner.go:164] Run: docker container inspect addons-328805 --format={{.State.Status}}
	I0111 08:17:23.069855  584390 ssh_runner.go:195] Run: systemctl --version
	I0111 08:17:23.069919  584390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-328805
	I0111 08:17:23.087920  584390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/addons-328805/id_rsa Username:docker}
	I0111 08:17:23.193276  584390 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:17:23.193362  584390 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:17:23.227094  584390 cri.go:96] found id: "bb52cdcdf239370a739b718645b1847f1dc66bd68fc953a4d722814805bb4c16"
	I0111 08:17:23.227118  584390 cri.go:96] found id: "460c09517af25a44cf182d5cb888a44196d6161278c2cac023a488d207a067a4"
	I0111 08:17:23.227124  584390 cri.go:96] found id: "99f9ea965aa4486785a33476b312b808fb66fa414b6a11e0bd75e81ad1abad61"
	I0111 08:17:23.227128  584390 cri.go:96] found id: "063615d38eaa0e13db47a70072e6973dd06a62891cc59a5258dd6ac66ecea0bb"
	I0111 08:17:23.227132  584390 cri.go:96] found id: "84537b1c3e760f4e6e18467eda908997738fbb4b52823f59e1551972ef1381a7"
	I0111 08:17:23.227136  584390 cri.go:96] found id: "19de157b5a3ae088cfe765c6e2d9792fb1955c3f1c4e0897901780f813f95502"
	I0111 08:17:23.227139  584390 cri.go:96] found id: "0635f7e73bbf15851297adfa4d74a0b702662260c000f88ea2ffcfcb4f54adf6"
	I0111 08:17:23.227142  584390 cri.go:96] found id: "204e7393acc00c4c17372d0bd4be2da36974d0177a818d9d882aa0756ff943ab"
	I0111 08:17:23.227145  584390 cri.go:96] found id: "aeb193327e0b9a8d1d153fba0e1b35395a826dfd6ae35b57165aa6fbd73b2ada"
	I0111 08:17:23.227152  584390 cri.go:96] found id: "543a3169c0f5dc3dc60ff3bc36df2dcb05cc8ea8dcb55df152f3229324b8cee2"
	I0111 08:17:23.227156  584390 cri.go:96] found id: "2ac3b5eadf88be560edf7c876973fb3fc300f1d4c9ccb038d46cd74bdd36c2b0"
	I0111 08:17:23.227159  584390 cri.go:96] found id: "64ea7483ae06043671776f69543ba85bbc98cf7a607dbe90392fdf4b0aa40218"
	I0111 08:17:23.227162  584390 cri.go:96] found id: "fa285db56145cd883caf086ab617dd0340089e9a28dd8dbfe2042027b32ccdaf"
	I0111 08:17:23.227166  584390 cri.go:96] found id: "5ccd2254d43c39a684bc4e7742776f5e83d79e79425519595c086bce28586ae5"
	I0111 08:17:23.227169  584390 cri.go:96] found id: "9f08c00a9e5cbd9b622ebe2cda721e91226ed731f72a6c84aee7b0f5b222fee2"
	I0111 08:17:23.227178  584390 cri.go:96] found id: "20073a807f1d535196a337a149445bb92614247effea77f00fa6549a2eeb7bf4"
	I0111 08:17:23.227182  584390 cri.go:96] found id: "2f5531b121ed5a05108517eae7ea167cb987253f65d23931264036a73afa5fa0"
	I0111 08:17:23.227187  584390 cri.go:96] found id: "e703aa2a2f4ba2cd5d21b762cf74979a519e16854ee9b97ac62e62fecc02b64e"
	I0111 08:17:23.227194  584390 cri.go:96] found id: "53d7d47c8ab1fadea2e3aa64eb12051dc8609b687a433995300204ce451cecb0"
	I0111 08:17:23.227197  584390 cri.go:96] found id: "2c030dbbc2adf0280378e029a8f6728c0a15a1cc5637d38bcce21b4f3a47b512"
	I0111 08:17:23.227203  584390 cri.go:96] found id: "9727839e3a9c5a483dcb1962109477c1d27af29a8f6f0045103afab8fa29cdc9"
	I0111 08:17:23.227208  584390 cri.go:96] found id: "1ff80bbdd9d615fe0669bc1d595010976e6325990cc6a0ec828abd832915372c"
	I0111 08:17:23.227211  584390 cri.go:96] found id: ""
	I0111 08:17:23.227264  584390 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:17:23.242721  584390 out.go:203] 
	W0111 08:17:23.245574  584390 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:17:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:17:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:17:23.245613  584390 out.go:285] * 
	* 
	W0111 08:17:23.249777  584390 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:17:23.252516  584390 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-328805 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)
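
As with LocalPath and NvidiaDevicePlugin above, the Yakd test body itself passed and only the addon teardown failed with the same runc error. When reproducing this by hand, the affected addons presumably stay enabled on the profile after the failed disable; one quick way to confirm (a sketch, assuming the addons-328805 profile still exists) is:

	out/minikube-linux-arm64 -p addons-328805 addons list    # yakd, nvidia-device-plugin and storage-provisioner-rancher would be expected to still show as enabled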

TestForceSystemdFlag (505.61s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-630015 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0111 09:00:11.589436  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-630015 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m21.332441266s)

-- stdout --
	* [force-systemd-flag-630015] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-630015" primary control-plane node in "force-systemd-flag-630015" cluster
	* Pulling base image v0.0.48-1768032998-22402 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	
	

-- /stdout --
** stderr ** 
	I0111 08:59:42.727417  757749 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:59:42.727561  757749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:59:42.727572  757749 out.go:374] Setting ErrFile to fd 2...
	I0111 08:59:42.727586  757749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:59:42.728228  757749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:59:42.728668  757749 out.go:368] Setting JSON to false
	I0111 08:59:42.729495  757749 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13333,"bootTime":1768108650,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 08:59:42.729566  757749 start.go:143] virtualization:  
	I0111 08:59:42.733115  757749 out.go:179] * [force-systemd-flag-630015] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:59:42.737693  757749 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:59:42.737848  757749 notify.go:221] Checking for updates...
	I0111 08:59:42.744442  757749 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:59:42.747693  757749 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:59:42.750736  757749 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 08:59:42.753823  757749 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:59:42.756890  757749 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:59:42.760416  757749 config.go:182] Loaded profile config "force-systemd-env-472282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:59:42.760588  757749 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:59:42.790965  757749 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:59:42.791085  757749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:59:42.861581  757749 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:59:42.852220532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:59:42.861689  757749 docker.go:319] overlay module found
	I0111 08:59:42.864936  757749 out.go:179] * Using the docker driver based on user configuration
	I0111 08:59:42.867895  757749 start.go:309] selected driver: docker
	I0111 08:59:42.867917  757749 start.go:928] validating driver "docker" against <nil>
	I0111 08:59:42.867931  757749 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:59:42.868689  757749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:59:42.919077  757749 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:59:42.910323157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:59:42.919231  757749 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:59:42.919447  757749 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 08:59:42.922401  757749 out.go:179] * Using Docker driver with root privileges
	I0111 08:59:42.925202  757749 cni.go:84] Creating CNI manager for ""
	I0111 08:59:42.925268  757749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:59:42.925281  757749 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 08:59:42.925365  757749 start.go:353] cluster config:
	{Name:force-systemd-flag-630015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-630015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:59:42.928567  757749 out.go:179] * Starting "force-systemd-flag-630015" primary control-plane node in "force-systemd-flag-630015" cluster
	I0111 08:59:42.931559  757749 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 08:59:42.934559  757749 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:59:42.937344  757749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:59:42.937397  757749 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 08:59:42.937410  757749 cache.go:65] Caching tarball of preloaded images
	I0111 08:59:42.937419  757749 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:59:42.937493  757749 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 08:59:42.937502  757749 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 08:59:42.937610  757749 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/config.json ...
	I0111 08:59:42.937627  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/config.json: {Name:mk0f6d2032b48bd70b430b3196c0a86321d46383 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:42.957103  757749 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:59:42.957122  757749 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:59:42.957143  757749 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:59:42.957177  757749 start.go:360] acquireMachinesLock for force-systemd-flag-630015: {Name:mk67b8ec2d0abace4db1e232ffdec873308880be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:59:42.957297  757749 start.go:364] duration metric: took 103.657µs to acquireMachinesLock for "force-systemd-flag-630015"
	I0111 08:59:42.957334  757749 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-630015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-630015 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 08:59:42.957396  757749 start.go:125] createHost starting for "" (driver="docker")
	I0111 08:59:42.962712  757749 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 08:59:42.962962  757749 start.go:159] libmachine.API.Create for "force-systemd-flag-630015" (driver="docker")
	I0111 08:59:42.963000  757749 client.go:173] LocalClient.Create starting
	I0111 08:59:42.963087  757749 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem
	I0111 08:59:42.963129  757749 main.go:144] libmachine: Decoding PEM data...
	I0111 08:59:42.963148  757749 main.go:144] libmachine: Parsing certificate...
	I0111 08:59:42.963203  757749 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem
	I0111 08:59:42.963227  757749 main.go:144] libmachine: Decoding PEM data...
	I0111 08:59:42.963242  757749 main.go:144] libmachine: Parsing certificate...
	I0111 08:59:42.963605  757749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-630015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 08:59:42.982107  757749 cli_runner.go:211] docker network inspect force-systemd-flag-630015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 08:59:42.982245  757749 network_create.go:284] running [docker network inspect force-systemd-flag-630015] to gather additional debugging logs...
	I0111 08:59:42.982269  757749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-630015
	W0111 08:59:42.998168  757749 cli_runner.go:211] docker network inspect force-systemd-flag-630015 returned with exit code 1
	I0111 08:59:42.998198  757749 network_create.go:287] error running [docker network inspect force-systemd-flag-630015]: docker network inspect force-systemd-flag-630015: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-630015 not found
	I0111 08:59:42.998210  757749 network_create.go:289] output of [docker network inspect force-systemd-flag-630015]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-630015 not found
	
	** /stderr **
	I0111 08:59:42.998312  757749 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:59:43.016102  757749 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-113e3e286bbe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:2e:86:95:08:19} reservation:<nil>}
	I0111 08:59:43.016386  757749 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461c1a9d970d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:7e:25:fe:d0:0d} reservation:<nil>}
	I0111 08:59:43.016676  757749 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a38e10816f85 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:42:af:ae:32:ae} reservation:<nil>}
	I0111 08:59:43.017092  757749 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a74e0}
	I0111 08:59:43.017113  757749 network_create.go:124] attempt to create docker network force-systemd-flag-630015 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0111 08:59:43.017177  757749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-630015 force-systemd-flag-630015
	I0111 08:59:43.083963  757749 network_create.go:108] docker network force-systemd-flag-630015 192.168.76.0/24 created
	I0111 08:59:43.084000  757749 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-630015" container
	I0111 08:59:43.084074  757749 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 08:59:43.099784  757749 cli_runner.go:164] Run: docker volume create force-systemd-flag-630015 --label name.minikube.sigs.k8s.io=force-systemd-flag-630015 --label created_by.minikube.sigs.k8s.io=true
	I0111 08:59:43.117375  757749 oci.go:103] Successfully created a docker volume force-systemd-flag-630015
	I0111 08:59:43.117474  757749 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-630015-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-630015 --entrypoint /usr/bin/test -v force-systemd-flag-630015:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 08:59:43.653740  757749 oci.go:107] Successfully prepared a docker volume force-systemd-flag-630015
	I0111 08:59:43.653820  757749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:59:43.653835  757749 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 08:59:43.653909  757749 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-630015:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 08:59:47.671172  757749 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-630015:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.01722166s)
	I0111 08:59:47.671207  757749 kic.go:203] duration metric: took 4.017368213s to extract preloaded images to volume ...
	W0111 08:59:47.671363  757749 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 08:59:47.671476  757749 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 08:59:47.759154  757749 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-630015 --name force-systemd-flag-630015 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-630015 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-630015 --network force-systemd-flag-630015 --ip 192.168.76.2 --volume force-systemd-flag-630015:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 08:59:48.042014  757749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-630015 --format={{.State.Running}}
	I0111 08:59:48.064767  757749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-630015 --format={{.State.Status}}
	I0111 08:59:48.086545  757749 cli_runner.go:164] Run: docker exec force-systemd-flag-630015 stat /var/lib/dpkg/alternatives/iptables
	I0111 08:59:48.140325  757749 oci.go:144] the created container "force-systemd-flag-630015" has a running status.
	I0111 08:59:48.140359  757749 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa...
	I0111 08:59:48.520390  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0111 08:59:48.520492  757749 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 08:59:48.542198  757749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-630015 --format={{.State.Status}}
	I0111 08:59:48.565991  757749 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 08:59:48.566011  757749 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-630015 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 08:59:48.612330  757749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-630015 --format={{.State.Status}}
	I0111 08:59:48.628891  757749 machine.go:94] provisionDockerMachine start ...
	I0111 08:59:48.628993  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:48.645700  757749 main.go:144] libmachine: Using SSH client type: native
	I0111 08:59:48.646041  757749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33773 <nil> <nil>}
	I0111 08:59:48.646051  757749 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:59:48.646642  757749 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42388->127.0.0.1:33773: read: connection reset by peer
	I0111 08:59:51.793849  757749 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-630015
	
	I0111 08:59:51.793877  757749 ubuntu.go:182] provisioning hostname "force-systemd-flag-630015"
	I0111 08:59:51.793953  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:51.811563  757749 main.go:144] libmachine: Using SSH client type: native
	I0111 08:59:51.811887  757749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33773 <nil> <nil>}
	I0111 08:59:51.811906  757749 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-630015 && echo "force-systemd-flag-630015" | sudo tee /etc/hostname
	I0111 08:59:51.971825  757749 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-630015
	
	I0111 08:59:51.971905  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:51.989588  757749 main.go:144] libmachine: Using SSH client type: native
	I0111 08:59:51.989887  757749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33773 <nil> <nil>}
	I0111 08:59:51.989903  757749 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-630015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-630015/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-630015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:59:52.138548  757749 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 08:59:52.138622  757749 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 08:59:52.138668  757749 ubuntu.go:190] setting up certificates
	I0111 08:59:52.138706  757749 provision.go:84] configureAuth start
	I0111 08:59:52.138857  757749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-630015
	I0111 08:59:52.156291  757749 provision.go:143] copyHostCerts
	I0111 08:59:52.156331  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 08:59:52.156362  757749 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 08:59:52.156369  757749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 08:59:52.156446  757749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 08:59:52.156553  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 08:59:52.156571  757749 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 08:59:52.156575  757749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 08:59:52.156601  757749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 08:59:52.156647  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 08:59:52.156663  757749 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 08:59:52.156667  757749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 08:59:52.156690  757749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 08:59:52.156742  757749 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-630015 san=[127.0.0.1 192.168.76.2 force-systemd-flag-630015 localhost minikube]
	I0111 08:59:52.313813  757749 provision.go:177] copyRemoteCerts
	I0111 08:59:52.313905  757749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:59:52.313953  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:52.331908  757749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa Username:docker}
	I0111 08:59:52.433923  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0111 08:59:52.433991  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0111 08:59:52.451886  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0111 08:59:52.451956  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 08:59:52.469576  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0111 08:59:52.469641  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 08:59:52.487904  757749 provision.go:87] duration metric: took 349.153797ms to configureAuth
	I0111 08:59:52.487977  757749 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:59:52.488194  757749 config.go:182] Loaded profile config "force-systemd-flag-630015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:59:52.488340  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:52.505713  757749 main.go:144] libmachine: Using SSH client type: native
	I0111 08:59:52.506048  757749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33773 <nil> <nil>}
	I0111 08:59:52.506068  757749 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 08:59:52.814443  757749 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 08:59:52.814474  757749 machine.go:97] duration metric: took 4.185563775s to provisionDockerMachine
	I0111 08:59:52.814490  757749 client.go:176] duration metric: took 9.851475453s to LocalClient.Create
	I0111 08:59:52.814505  757749 start.go:167] duration metric: took 9.851544237s to libmachine.API.Create "force-systemd-flag-630015"
	I0111 08:59:52.814526  757749 start.go:293] postStartSetup for "force-systemd-flag-630015" (driver="docker")
	I0111 08:59:52.814541  757749 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:59:52.814618  757749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:59:52.814665  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:52.832489  757749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa Username:docker}
	I0111 08:59:52.939674  757749 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:59:52.943756  757749 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:59:52.943791  757749 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:59:52.943803  757749 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 08:59:52.943856  757749 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 08:59:52.943945  757749 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 08:59:52.943958  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> /etc/ssl/certs/5769072.pem
	I0111 08:59:52.944054  757749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:59:52.952265  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 08:59:52.972869  757749 start.go:296] duration metric: took 158.324009ms for postStartSetup
	I0111 08:59:52.973257  757749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-630015
	I0111 08:59:52.992259  757749 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/config.json ...
	I0111 08:59:52.992555  757749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:59:52.992608  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:53.012475  757749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa Username:docker}
	I0111 08:59:53.115039  757749 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:59:53.119651  757749 start.go:128] duration metric: took 10.162240712s to createHost
	I0111 08:59:53.119684  757749 start.go:83] releasing machines lock for "force-systemd-flag-630015", held for 10.162376755s
	I0111 08:59:53.119758  757749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-630015
	I0111 08:59:53.136913  757749 ssh_runner.go:195] Run: cat /version.json
	I0111 08:59:53.136928  757749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:59:53.136967  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:53.136987  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:53.158664  757749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa Username:docker}
	I0111 08:59:53.167698  757749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa Username:docker}
	I0111 08:59:53.357632  757749 ssh_runner.go:195] Run: systemctl --version
	I0111 08:59:53.364331  757749 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 08:59:53.399763  757749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:59:53.404278  757749 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:59:53.404352  757749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:59:53.432690  757749 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 08:59:53.432764  757749 start.go:496] detecting cgroup driver to use...
	I0111 08:59:53.432793  757749 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0111 08:59:53.432900  757749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 08:59:53.450764  757749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:59:53.464083  757749 docker.go:218] disabling cri-docker service (if available) ...
	I0111 08:59:53.464169  757749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 08:59:53.482589  757749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 08:59:53.502225  757749 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 08:59:53.632454  757749 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 08:59:53.783752  757749 docker.go:234] disabling docker service ...
	I0111 08:59:53.783831  757749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 08:59:53.803904  757749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 08:59:53.816845  757749 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 08:59:53.949800  757749 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 08:59:54.075042  757749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 08:59:54.088917  757749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:59:54.102807  757749 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 08:59:54.102916  757749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.111777  757749 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 08:59:54.111850  757749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.121055  757749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.130181  757749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.139689  757749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:59:54.147555  757749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.156607  757749 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.170180  757749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.179370  757749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:59:54.186830  757749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:59:54.194205  757749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:59:54.320996  757749 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 08:59:54.501839  757749 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 08:59:54.501932  757749 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 08:59:54.505960  757749 start.go:574] Will wait 60s for crictl version
	I0111 08:59:54.506076  757749 ssh_runner.go:195] Run: which crictl
	I0111 08:59:54.509495  757749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:59:54.534254  757749 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 08:59:54.534373  757749 ssh_runner.go:195] Run: crio --version
	I0111 08:59:54.561533  757749 ssh_runner.go:195] Run: crio --version
	I0111 08:59:54.596514  757749 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 08:59:54.599399  757749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-630015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:59:54.615416  757749 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 08:59:54.619390  757749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:59:54.628960  757749 kubeadm.go:884] updating cluster {Name:force-systemd-flag-630015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-630015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:59:54.629077  757749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:59:54.629127  757749 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:59:54.671757  757749 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:59:54.671787  757749 crio.go:433] Images already preloaded, skipping extraction
	I0111 08:59:54.671847  757749 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:59:54.702634  757749 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:59:54.702658  757749 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:59:54.702667  757749 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0111 08:59:54.702759  757749 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-630015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-630015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:59:54.702850  757749 ssh_runner.go:195] Run: crio config
	I0111 08:59:54.756410  757749 cni.go:84] Creating CNI manager for ""
	I0111 08:59:54.756434  757749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:59:54.756487  757749 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:59:54.756520  757749 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-630015 NodeName:force-systemd-flag-630015 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:59:54.756664  757749 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-630015"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 08:59:54.756743  757749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:59:54.764408  757749 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:59:54.764507  757749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:59:54.772131  757749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0111 08:59:54.784827  757749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:59:54.798337  757749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0111 08:59:54.811438  757749 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:59:54.815095  757749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:59:54.825510  757749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:59:54.952595  757749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:59:54.969787  757749 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015 for IP: 192.168.76.2
	I0111 08:59:54.969851  757749 certs.go:195] generating shared ca certs ...
	I0111 08:59:54.969884  757749 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:54.970078  757749 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 08:59:54.970180  757749 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 08:59:54.970209  757749 certs.go:257] generating profile certs ...
	I0111 08:59:54.970299  757749 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/client.key
	I0111 08:59:54.970341  757749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/client.crt with IP's: []
	I0111 08:59:55.111056  757749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/client.crt ...
	I0111 08:59:55.111094  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/client.crt: {Name:mk3447f8010fea84488c5d961de16a6017788675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:55.111305  757749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/client.key ...
	I0111 08:59:55.111321  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/client.key: {Name:mk3810c0261b479f915815c69b7bbb1973a449e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:55.111424  757749 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key.54eed94c
	I0111 08:59:55.111445  757749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt.54eed94c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0111 08:59:55.391700  757749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt.54eed94c ...
	I0111 08:59:55.391734  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt.54eed94c: {Name:mk0dfd65c00056ee70dc240b7a6870a7253530f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:55.391927  757749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key.54eed94c ...
	I0111 08:59:55.391941  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key.54eed94c: {Name:mk6dbd290a2c614096c20a27dabbd886954df729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:55.392035  757749 certs.go:382] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt.54eed94c -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt
	I0111 08:59:55.392111  757749 certs.go:386] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key.54eed94c -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key
	I0111 08:59:55.392172  757749 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.key
	I0111 08:59:55.392193  757749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.crt with IP's: []
	I0111 08:59:55.638377  757749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.crt ...
	I0111 08:59:55.638409  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.crt: {Name:mk44fcca6096a57843d8bf5df407d624f081de1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:55.638601  757749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.key ...
	I0111 08:59:55.638616  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.key: {Name:mkacef72efa4354d2cd0d689112bb93f5a595040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:55.638704  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0111 08:59:55.638726  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0111 08:59:55.638742  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0111 08:59:55.638762  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0111 08:59:55.638773  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0111 08:59:55.638792  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0111 08:59:55.638808  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0111 08:59:55.638819  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0111 08:59:55.638878  757749 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 08:59:55.638921  757749 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 08:59:55.638934  757749 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 08:59:55.638960  757749 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 08:59:55.638988  757749 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:59:55.639016  757749 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 08:59:55.639064  757749 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 08:59:55.639099  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:59:55.639116  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem -> /usr/share/ca-certificates/576907.pem
	I0111 08:59:55.639127  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> /usr/share/ca-certificates/5769072.pem
	I0111 08:59:55.639717  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:59:55.658896  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 08:59:55.676895  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:59:55.701485  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:59:55.726251  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0111 08:59:55.748578  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:59:55.767393  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:59:55.784670  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 08:59:55.802690  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:59:55.820342  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 08:59:55.838708  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 08:59:55.856927  757749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:59:55.869767  757749 ssh_runner.go:195] Run: openssl version
	I0111 08:59:55.876569  757749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 08:59:55.884166  757749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 08:59:55.891629  757749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 08:59:55.895300  757749 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 08:59:55.895368  757749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 08:59:55.937500  757749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:59:55.945135  757749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/576907.pem /etc/ssl/certs/51391683.0
	I0111 08:59:55.952879  757749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 08:59:55.960325  757749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 08:59:55.968063  757749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 08:59:55.972281  757749 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 08:59:55.972345  757749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 08:59:56.016108  757749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:59:56.024171  757749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5769072.pem /etc/ssl/certs/3ec20f2e.0
	I0111 08:59:56.032118  757749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:59:56.040182  757749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:59:56.048247  757749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:59:56.052181  757749 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:59:56.052250  757749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:59:56.093684  757749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:59:56.101837  757749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 08:59:56.109693  757749 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:59:56.113409  757749 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 08:59:56.113466  757749 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-630015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-630015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:59:56.113553  757749 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:59:56.113617  757749 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:59:56.141097  757749 cri.go:96] found id: ""
	I0111 08:59:56.141175  757749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:59:56.149374  757749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:59:56.157193  757749 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:59:56.157293  757749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:59:56.166142  757749 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:59:56.166168  757749 kubeadm.go:158] found existing configuration files:
	
	I0111 08:59:56.166229  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:59:56.175404  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:59:56.175522  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:59:56.183174  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:59:56.190971  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:59:56.191088  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:59:56.198545  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:59:56.206394  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:59:56.206470  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:59:56.213915  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:59:56.221725  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:59:56.221795  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:59:56.229122  757749 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:59:56.266737  757749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:59:56.266800  757749 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:59:56.342554  757749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:59:56.342634  757749 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:59:56.342674  757749 kubeadm.go:319] OS: Linux
	I0111 08:59:56.342724  757749 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:59:56.342777  757749 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:59:56.342828  757749 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:59:56.342880  757749 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:59:56.342931  757749 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:59:56.342984  757749 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:59:56.343033  757749 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:59:56.343092  757749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:59:56.343141  757749 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:59:56.410410  757749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:59:56.410535  757749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:59:56.410633  757749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:59:56.418629  757749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:59:56.425154  757749 out.go:252]   - Generating certificates and keys ...
	I0111 08:59:56.425245  757749 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:59:56.425317  757749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:59:57.057632  757749 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 08:59:57.566376  757749 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 08:59:57.802598  757749 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 08:59:57.922207  757749 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 08:59:57.989463  757749 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 08:59:57.989777  757749 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-630015 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 08:59:58.119139  757749 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 08:59:58.119532  757749 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-630015 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 08:59:58.190162  757749 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 08:59:58.411963  757749 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 08:59:58.636874  757749 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 08:59:58.637165  757749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:59:58.856965  757749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:59:59.269048  757749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:59:59.579868  757749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:59:59.746731  757749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:59:59.947813  757749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:59:59.948493  757749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:59:59.951202  757749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:59:59.954908  757749 out.go:252]   - Booting up control plane ...
	I0111 08:59:59.955011  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:59:59.955089  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:59:59.955158  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:59:59.970035  757749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:59:59.970252  757749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:59:59.979482  757749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:59:59.979596  757749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:59:59.979655  757749 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 09:00:00.628099  757749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 09:00:00.628227  757749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 09:04:00.598421  757749 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000473872s
	I0111 09:04:00.598459  757749 kubeadm.go:319] 
	I0111 09:04:00.598526  757749 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 09:04:00.598567  757749 kubeadm.go:319] 	- The kubelet is not running
	I0111 09:04:00.598685  757749 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 09:04:00.598696  757749 kubeadm.go:319] 
	I0111 09:04:00.598811  757749 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 09:04:00.598848  757749 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 09:04:00.598889  757749 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 09:04:00.598899  757749 kubeadm.go:319] 
	I0111 09:04:00.609837  757749 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:04:00.610361  757749 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 09:04:00.610477  757749 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 09:04:00.610770  757749 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 09:04:00.610789  757749 kubeadm.go:319] 
	I0111 09:04:00.610865  757749 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0111 09:04:00.611020  757749 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-630015 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-630015 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000473872s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-630015 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-630015 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000473872s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I0111 09:04:00.611112  757749 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0111 09:04:01.030167  757749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:04:01.043832  757749 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 09:04:01.043904  757749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 09:04:01.052245  757749 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 09:04:01.052265  757749 kubeadm.go:158] found existing configuration files:
	
	I0111 09:04:01.052317  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 09:04:01.060474  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 09:04:01.060546  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 09:04:01.068369  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 09:04:01.076442  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 09:04:01.076507  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 09:04:01.084958  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 09:04:01.093001  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 09:04:01.093111  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 09:04:01.104919  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 09:04:01.116271  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 09:04:01.116345  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 09:04:01.125437  757749 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 09:04:01.180127  757749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 09:04:01.180214  757749 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 09:04:01.263691  757749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 09:04:01.263771  757749 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 09:04:01.263812  757749 kubeadm.go:319] OS: Linux
	I0111 09:04:01.263863  757749 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 09:04:01.263922  757749 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 09:04:01.263981  757749 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 09:04:01.264035  757749 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 09:04:01.264089  757749 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 09:04:01.264142  757749 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 09:04:01.264192  757749 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 09:04:01.264249  757749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 09:04:01.264301  757749 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 09:04:01.332134  757749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 09:04:01.332257  757749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 09:04:01.332354  757749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 09:04:01.340058  757749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 09:04:01.345221  757749 out.go:252]   - Generating certificates and keys ...
	I0111 09:04:01.345313  757749 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 09:04:01.345383  757749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 09:04:01.345459  757749 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0111 09:04:01.345520  757749 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0111 09:04:01.345591  757749 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0111 09:04:01.345645  757749 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0111 09:04:01.345707  757749 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0111 09:04:01.345769  757749 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0111 09:04:01.346280  757749 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0111 09:04:01.346736  757749 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0111 09:04:01.347161  757749 kubeadm.go:319] [certs] Using the existing "sa" key
	I0111 09:04:01.347255  757749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 09:04:02.153749  757749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 09:04:02.549592  757749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 09:04:02.718485  757749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 09:04:03.108587  757749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 09:04:03.292500  757749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 09:04:03.293149  757749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 09:04:03.295747  757749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 09:04:03.298929  757749 out.go:252]   - Booting up control plane ...
	I0111 09:04:03.299039  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 09:04:03.299122  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 09:04:03.300853  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 09:04:03.316699  757749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 09:04:03.316810  757749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 09:04:03.325052  757749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 09:04:03.325437  757749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 09:04:03.325591  757749 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 09:04:03.470373  757749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 09:04:03.470503  757749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 09:08:03.470270  757749 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000128716s
	I0111 09:08:03.470648  757749 kubeadm.go:319] 
	I0111 09:08:03.470766  757749 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 09:08:03.470824  757749 kubeadm.go:319] 	- The kubelet is not running
	I0111 09:08:03.471138  757749 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 09:08:03.471147  757749 kubeadm.go:319] 
	I0111 09:08:03.471326  757749 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 09:08:03.471395  757749 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 09:08:03.471646  757749 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 09:08:03.471655  757749 kubeadm.go:319] 
	I0111 09:08:03.482608  757749 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:08:03.483183  757749 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 09:08:03.483345  757749 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 09:08:03.483671  757749 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0111 09:08:03.483715  757749 kubeadm.go:319] 
	I0111 09:08:03.483840  757749 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 09:08:03.483920  757749 kubeadm.go:403] duration metric: took 8m7.370458715s to StartCluster
	I0111 09:08:03.483996  757749 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0111 09:08:03.484109  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0111 09:08:03.516164  757749 cri.go:96] found id: ""
	I0111 09:08:03.516254  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.516277  757749 logs.go:284] No container was found matching "kube-apiserver"
	I0111 09:08:03.516312  757749 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0111 09:08:03.516408  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0111 09:08:03.545775  757749 cri.go:96] found id: ""
	I0111 09:08:03.545857  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.545881  757749 logs.go:284] No container was found matching "etcd"
	I0111 09:08:03.545915  757749 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0111 09:08:03.546011  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0111 09:08:03.575175  757749 cri.go:96] found id: ""
	I0111 09:08:03.575200  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.575209  757749 logs.go:284] No container was found matching "coredns"
	I0111 09:08:03.575215  757749 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0111 09:08:03.575305  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0111 09:08:03.601709  757749 cri.go:96] found id: ""
	I0111 09:08:03.601739  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.601748  757749 logs.go:284] No container was found matching "kube-scheduler"
	I0111 09:08:03.601755  757749 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0111 09:08:03.601813  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0111 09:08:03.629040  757749 cri.go:96] found id: ""
	I0111 09:08:03.629068  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.629076  757749 logs.go:284] No container was found matching "kube-proxy"
	I0111 09:08:03.629083  757749 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0111 09:08:03.629148  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0111 09:08:03.656996  757749 cri.go:96] found id: ""
	I0111 09:08:03.657025  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.657034  757749 logs.go:284] No container was found matching "kube-controller-manager"
	I0111 09:08:03.657041  757749 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0111 09:08:03.657103  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0111 09:08:03.700703  757749 cri.go:96] found id: ""
	I0111 09:08:03.700777  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.700798  757749 logs.go:284] No container was found matching "kindnet"
	I0111 09:08:03.700821  757749 logs.go:123] Gathering logs for kubelet ...
	I0111 09:08:03.700859  757749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0111 09:08:03.778858  757749 logs.go:123] Gathering logs for dmesg ...
	I0111 09:08:03.778899  757749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0111 09:08:03.798167  757749 logs.go:123] Gathering logs for describe nodes ...
	I0111 09:08:03.798197  757749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0111 09:08:03.923385  757749 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 09:08:03.914581    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.915336    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.917042    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.917378    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.918777    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0111 09:08:03.914581    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.915336    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.917042    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.917378    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.918777    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0111 09:08:03.923408  757749 logs.go:123] Gathering logs for CRI-O ...
	I0111 09:08:03.923423  757749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0111 09:08:03.960061  757749 logs.go:123] Gathering logs for container status ...
	I0111 09:08:03.960095  757749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0111 09:08:03.989973  757749 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128716s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W0111 09:08:03.990062  757749 out.go:285] * 
	* 
	W0111 09:08:03.990262  757749 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128716s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128716s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 09:08:03.990288  757749 out.go:285] * 
	* 
	W0111 09:08:03.990561  757749 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 09:08:03.997011  757749 out.go:203] 
	W0111 09:08:04.000016  757749 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128716s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128716s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 09:08:04.000073  757749 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0111 09:08:04.000094  757749 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0111 09:08:04.004046  757749 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-630015 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-630015 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2026-01-11 09:08:04.427101728 +0000 UTC m=+3269.212931932
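Note on the failure above: the kubelet never became healthy on http://127.0.0.1:10248/healthz, and the captured log's own suggestion is to retry with an explicit systemd cgroup driver for the kubelet. A minimal, unverified sketch of that retry, combining the failing invocation reported at docker_test.go:93 with the suggested flag (the --extra-config value is taken directly from the log's suggestion and has not been validated against this environment):

	out/minikube-linux-arm64 start -p force-systemd-flag-630015 --memory=3072 --force-systemd \
	  --alsologtostderr -v=5 --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd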
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-630015
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-630015:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "482299d8a7c6ab8e1452a5095ece56849e05eb31ea5a504c8673508e3516e916",
	        "Created": "2026-01-11T08:59:47.77404633Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 758177,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T08:59:47.841192494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/482299d8a7c6ab8e1452a5095ece56849e05eb31ea5a504c8673508e3516e916/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/482299d8a7c6ab8e1452a5095ece56849e05eb31ea5a504c8673508e3516e916/hostname",
	        "HostsPath": "/var/lib/docker/containers/482299d8a7c6ab8e1452a5095ece56849e05eb31ea5a504c8673508e3516e916/hosts",
	        "LogPath": "/var/lib/docker/containers/482299d8a7c6ab8e1452a5095ece56849e05eb31ea5a504c8673508e3516e916/482299d8a7c6ab8e1452a5095ece56849e05eb31ea5a504c8673508e3516e916-json.log",
	        "Name": "/force-systemd-flag-630015",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-630015:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-630015",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "482299d8a7c6ab8e1452a5095ece56849e05eb31ea5a504c8673508e3516e916",
	                "LowerDir": "/var/lib/docker/overlay2/63b52be5bec6044e3437399957e7c1c24019f26de91cccec1f688f546ef5d176-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63b52be5bec6044e3437399957e7c1c24019f26de91cccec1f688f546ef5d176/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63b52be5bec6044e3437399957e7c1c24019f26de91cccec1f688f546ef5d176/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63b52be5bec6044e3437399957e7c1c24019f26de91cccec1f688f546ef5d176/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-630015",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-630015/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-630015",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-630015",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-630015",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10a840c2d318c4968a252021eb30bc9ad5bd67a352dd54f206c13e52addac315",
	            "SandboxKey": "/var/run/docker/netns/10a840c2d318",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33773"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33774"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33777"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33775"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33776"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-630015": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:13:8a:bb:c5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6ac2cdd04afb9525a00ba017891a60194fd6ec3027b3a1ce79e08168801fded1",
	                    "EndpointID": "1fc756c9f12e8ed66771ac31ffa03033587b604a0458a952f631f0913489bbf0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-630015",
	                        "482299d8a7c6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
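Note: in the inspect output above, HostConfig.PortBindings requests ephemeral host ports (HostPort "") bound to 127.0.0.1, and the resolved mappings only appear under NetworkSettings.Ports once the container is running. As a sketch (container name taken from this report), the SSH mapping can be read back with the same Go template the test harness itself uses:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-flag-630015
	# prints the ephemeral host port for 22/tcp, e.g. 33773 in the output above
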
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-630015 -n force-systemd-flag-630015
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-630015 -n force-systemd-flag-630015: exit status 6 (330.708033ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 09:08:04.759928  784648 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-630015" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
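The exit status 6 above reflects only the kubeconfig mismatch: the host is Running, but the "force-systemd-flag-630015" endpoint is missing from the kubeconfig, so kubectl still points at a stale context. A minimal sketch of the fix the warning itself suggests (profile name taken from this report):

	out/minikube-linux-arm64 -p force-systemd-flag-630015 update-context
	kubectl config current-context   # verify which context kubectl is now using
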
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-630015 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p force-systemd-env-472282                                                                                                                                                                                                                   │ force-systemd-env-472282  │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:01 UTC │
	│ start   │ -p cert-options-459267 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ cert-options-459267 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ -p cert-options-459267 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ delete  │ -p cert-options-459267                                                                                                                                                                                                                        │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-931581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │                     │
	│ stop    │ -p old-k8s-version-931581 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-931581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:04 UTC │
	│ image   │ old-k8s-version-931581 image list --format=json                                                                                                                                                                                               │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ pause   │ -p old-k8s-version-931581 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │                     │
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                                                                                     │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                                                                                     │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-236664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │                     │
	│ stop    │ -p no-preload-236664 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │ 11 Jan 26 09:06 UTC │
	│ addons  │ enable dashboard -p no-preload-236664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ image   │ no-preload-236664 image list --format=json                                                                                                                                                                                                    │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ pause   │ -p no-preload-236664 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │                     │
	│ delete  │ -p no-preload-236664                                                                                                                                                                                                                          │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ delete  │ -p no-preload-236664                                                                                                                                                                                                                          │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-630626        │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │                     │
	│ ssh     │ force-systemd-flag-630015 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-630015 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:07:20
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:07:20.146628  781733 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:07:20.146848  781733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:07:20.146876  781733 out.go:374] Setting ErrFile to fd 2...
	I0111 09:07:20.146900  781733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:07:20.147216  781733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:07:20.147716  781733 out.go:368] Setting JSON to false
	I0111 09:07:20.148616  781733 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13790,"bootTime":1768108650,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:07:20.148767  781733 start.go:143] virtualization:  
	I0111 09:07:20.152655  781733 out.go:179] * [embed-certs-630626] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:07:20.157073  781733 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:07:20.157120  781733 notify.go:221] Checking for updates...
	I0111 09:07:20.160476  781733 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:07:20.163610  781733 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:07:20.166694  781733 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:07:20.169756  781733 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:07:20.172780  781733 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:07:20.176480  781733 config.go:182] Loaded profile config "force-systemd-flag-630015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:07:20.176634  781733 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:07:20.199664  781733 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:07:20.199783  781733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:07:20.262721  781733 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:07:20.252862513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:07:20.262829  781733 docker.go:319] overlay module found
	I0111 09:07:20.266096  781733 out.go:179] * Using the docker driver based on user configuration
	I0111 09:07:20.269097  781733 start.go:309] selected driver: docker
	I0111 09:07:20.269115  781733 start.go:928] validating driver "docker" against <nil>
	I0111 09:07:20.269149  781733 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:07:20.269902  781733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:07:20.323436  781733 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:07:20.313781911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:07:20.323602  781733 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 09:07:20.323828  781733 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:07:20.326808  781733 out.go:179] * Using Docker driver with root privileges
	I0111 09:07:20.329850  781733 cni.go:84] Creating CNI manager for ""
	I0111 09:07:20.329930  781733 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:07:20.329943  781733 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 09:07:20.330027  781733 start.go:353] cluster config:
	{Name:embed-certs-630626 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-630626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:07:20.335162  781733 out.go:179] * Starting "embed-certs-630626" primary control-plane node in "embed-certs-630626" cluster
	I0111 09:07:20.338085  781733 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:07:20.341115  781733 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:07:20.344048  781733 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:07:20.344122  781733 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 09:07:20.344155  781733 cache.go:65] Caching tarball of preloaded images
	I0111 09:07:20.344128  781733 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:07:20.344279  781733 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:07:20.344291  781733 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 09:07:20.344401  781733 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/config.json ...
	I0111 09:07:20.344419  781733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/config.json: {Name:mkd93abb84a2c19b5e01dad1f406c977f8bbf0a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:07:20.363762  781733 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:07:20.363783  781733 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:07:20.363804  781733 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:07:20.363834  781733 start.go:360] acquireMachinesLock for embed-certs-630626: {Name:mkd95b5b6f25655182ae68d0dfec1c5695a6e23a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:07:20.363955  781733 start.go:364] duration metric: took 100.285µs to acquireMachinesLock for "embed-certs-630626"
	I0111 09:07:20.363993  781733 start.go:93] Provisioning new machine with config: &{Name:embed-certs-630626 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-630626 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:07:20.364064  781733 start.go:125] createHost starting for "" (driver="docker")
	I0111 09:07:20.369374  781733 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 09:07:20.369618  781733 start.go:159] libmachine.API.Create for "embed-certs-630626" (driver="docker")
	I0111 09:07:20.369655  781733 client.go:173] LocalClient.Create starting
	I0111 09:07:20.369729  781733 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem
	I0111 09:07:20.369775  781733 main.go:144] libmachine: Decoding PEM data...
	I0111 09:07:20.369796  781733 main.go:144] libmachine: Parsing certificate...
	I0111 09:07:20.369846  781733 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem
	I0111 09:07:20.369872  781733 main.go:144] libmachine: Decoding PEM data...
	I0111 09:07:20.369887  781733 main.go:144] libmachine: Parsing certificate...
	I0111 09:07:20.370290  781733 cli_runner.go:164] Run: docker network inspect embed-certs-630626 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 09:07:20.385876  781733 cli_runner.go:211] docker network inspect embed-certs-630626 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 09:07:20.385961  781733 network_create.go:284] running [docker network inspect embed-certs-630626] to gather additional debugging logs...
	I0111 09:07:20.385981  781733 cli_runner.go:164] Run: docker network inspect embed-certs-630626
	W0111 09:07:20.402709  781733 cli_runner.go:211] docker network inspect embed-certs-630626 returned with exit code 1
	I0111 09:07:20.402740  781733 network_create.go:287] error running [docker network inspect embed-certs-630626]: docker network inspect embed-certs-630626: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-630626 not found
	I0111 09:07:20.402753  781733 network_create.go:289] output of [docker network inspect embed-certs-630626]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-630626 not found
	
	** /stderr **
	I0111 09:07:20.402865  781733 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:07:20.420094  781733 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-113e3e286bbe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:2e:86:95:08:19} reservation:<nil>}
	I0111 09:07:20.420441  781733 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461c1a9d970d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:7e:25:fe:d0:0d} reservation:<nil>}
	I0111 09:07:20.420779  781733 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a38e10816f85 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:42:af:ae:32:ae} reservation:<nil>}
	I0111 09:07:20.421040  781733 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6ac2cdd04afb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:0e:43:8e:04:e3} reservation:<nil>}
	I0111 09:07:20.421469  781733 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a35130}
	I0111 09:07:20.421493  781733 network_create.go:124] attempt to create docker network embed-certs-630626 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 09:07:20.421551  781733 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-630626 embed-certs-630626
	I0111 09:07:20.480643  781733 network_create.go:108] docker network embed-certs-630626 192.168.85.0/24 created
	I0111 09:07:20.480679  781733 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-630626" container
	I0111 09:07:20.480755  781733 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 09:07:20.497308  781733 cli_runner.go:164] Run: docker volume create embed-certs-630626 --label name.minikube.sigs.k8s.io=embed-certs-630626 --label created_by.minikube.sigs.k8s.io=true
	I0111 09:07:20.515662  781733 oci.go:103] Successfully created a docker volume embed-certs-630626
	I0111 09:07:20.515748  781733 cli_runner.go:164] Run: docker run --rm --name embed-certs-630626-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-630626 --entrypoint /usr/bin/test -v embed-certs-630626:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 09:07:21.042675  781733 oci.go:107] Successfully prepared a docker volume embed-certs-630626
	I0111 09:07:21.042742  781733 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:07:21.042752  781733 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 09:07:21.042821  781733 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-630626:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 09:07:25.078395  781733 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-630626:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.035527784s)
	I0111 09:07:25.078430  781733 kic.go:203] duration metric: took 4.03567364s to extract preloaded images to volume ...
	W0111 09:07:25.078589  781733 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 09:07:25.078716  781733 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 09:07:25.146043  781733 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-630626 --name embed-certs-630626 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-630626 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-630626 --network embed-certs-630626 --ip 192.168.85.2 --volume embed-certs-630626:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 09:07:25.460377  781733 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Running}}
	I0111 09:07:25.480641  781733 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:07:25.502072  781733 cli_runner.go:164] Run: docker exec embed-certs-630626 stat /var/lib/dpkg/alternatives/iptables
	I0111 09:07:25.556826  781733 oci.go:144] the created container "embed-certs-630626" has a running status.
	I0111 09:07:25.556854  781733 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa...
	I0111 09:07:25.768696  781733 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 09:07:25.793686  781733 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:07:25.827925  781733 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 09:07:25.827950  781733 kic_runner.go:114] Args: [docker exec --privileged embed-certs-630626 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 09:07:25.893183  781733 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:07:25.912482  781733 machine.go:94] provisionDockerMachine start ...
	I0111 09:07:25.912576  781733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:07:25.937344  781733 main.go:144] libmachine: Using SSH client type: native
	I0111 09:07:25.937695  781733 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33803 <nil> <nil>}
	I0111 09:07:25.937711  781733 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 09:07:25.938713  781733 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0111 09:07:29.089852  781733 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-630626
	
	I0111 09:07:29.089880  781733 ubuntu.go:182] provisioning hostname "embed-certs-630626"
	I0111 09:07:29.089964  781733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:07:29.108852  781733 main.go:144] libmachine: Using SSH client type: native
	I0111 09:07:29.109163  781733 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33803 <nil> <nil>}
	I0111 09:07:29.109179  781733 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-630626 && echo "embed-certs-630626" | sudo tee /etc/hostname
	I0111 09:07:29.267000  781733 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-630626
	
	I0111 09:07:29.267095  781733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:07:29.284595  781733 main.go:144] libmachine: Using SSH client type: native
	I0111 09:07:29.284911  781733 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33803 <nil> <nil>}
	I0111 09:07:29.284933  781733 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-630626' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-630626/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-630626' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 09:07:29.430567  781733 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 09:07:29.430598  781733 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 09:07:29.430623  781733 ubuntu.go:190] setting up certificates
	I0111 09:07:29.430631  781733 provision.go:84] configureAuth start
	I0111 09:07:29.430693  781733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-630626
	I0111 09:07:29.449128  781733 provision.go:143] copyHostCerts
	I0111 09:07:29.449201  781733 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 09:07:29.449215  781733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 09:07:29.449296  781733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 09:07:29.449399  781733 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 09:07:29.449410  781733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 09:07:29.449438  781733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 09:07:29.449505  781733 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 09:07:29.449522  781733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 09:07:29.449567  781733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 09:07:29.449626  781733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.embed-certs-630626 san=[127.0.0.1 192.168.85.2 embed-certs-630626 localhost minikube]
	I0111 09:07:29.897870  781733 provision.go:177] copyRemoteCerts
	I0111 09:07:29.897946  781733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 09:07:29.897991  781733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:07:29.916075  781733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:07:30.042723  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0111 09:07:30.083682  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 09:07:30.111225  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 09:07:30.133972  781733 provision.go:87] duration metric: took 703.316123ms to configureAuth
	I0111 09:07:30.134002  781733 ubuntu.go:206] setting minikube options for container-runtime
	I0111 09:07:30.134242  781733 config.go:182] Loaded profile config "embed-certs-630626": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:07:30.134354  781733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:07:30.153964  781733 main.go:144] libmachine: Using SSH client type: native
	I0111 09:07:30.154328  781733 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33803 <nil> <nil>}
	I0111 09:07:30.154353  781733 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 09:07:30.454732  781733 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 09:07:30.454757  781733 machine.go:97] duration metric: took 4.542250183s to provisionDockerMachine
	I0111 09:07:30.454767  781733 client.go:176] duration metric: took 10.085100595s to LocalClient.Create
	I0111 09:07:30.454781  781733 start.go:167] duration metric: took 10.085164325s to libmachine.API.Create "embed-certs-630626"
	I0111 09:07:30.454789  781733 start.go:293] postStartSetup for "embed-certs-630626" (driver="docker")
	I0111 09:07:30.454800  781733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 09:07:30.454885  781733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 09:07:30.454932  781733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:07:30.475578  781733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:07:30.582401  781733 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 09:07:30.585775  781733 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 09:07:30.585802  781733 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 09:07:30.585813  781733 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 09:07:30.585867  781733 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 09:07:30.585948  781733 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 09:07:30.586052  781733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 09:07:30.593629  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:07:30.612039  781733 start.go:296] duration metric: took 157.234926ms for postStartSetup
	I0111 09:07:30.612405  781733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-630626
	I0111 09:07:30.629859  781733 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/config.json ...
	I0111 09:07:30.630181  781733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 09:07:30.630224  781733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:07:30.646892  781733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:07:30.747713  781733 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 09:07:30.759487  781733 start.go:128] duration metric: took 10.395408032s to createHost
	I0111 09:07:30.759518  781733 start.go:83] releasing machines lock for "embed-certs-630626", held for 10.395549318s
	I0111 09:07:30.759595  781733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-630626
	I0111 09:07:30.776217  781733 ssh_runner.go:195] Run: cat /version.json
	I0111 09:07:30.776269  781733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:07:30.776304  781733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 09:07:30.776367  781733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:07:30.804691  781733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:07:30.811947  781733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:07:30.906019  781733 ssh_runner.go:195] Run: systemctl --version
	I0111 09:07:31.011844  781733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 09:07:31.048704  781733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 09:07:31.053306  781733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 09:07:31.053406  781733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 09:07:31.083673  781733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 09:07:31.083706  781733 start.go:496] detecting cgroup driver to use...
	I0111 09:07:31.083742  781733 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 09:07:31.083814  781733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 09:07:31.103107  781733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 09:07:31.116520  781733 docker.go:218] disabling cri-docker service (if available) ...
	I0111 09:07:31.116609  781733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 09:07:31.134226  781733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 09:07:31.154946  781733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 09:07:31.305112  781733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 09:07:31.432514  781733 docker.go:234] disabling docker service ...
	I0111 09:07:31.432619  781733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 09:07:31.456207  781733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 09:07:31.469663  781733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 09:07:31.595192  781733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 09:07:31.714679  781733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 09:07:31.727792  781733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 09:07:31.741838  781733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 09:07:31.741946  781733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:07:31.751747  781733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 09:07:31.751851  781733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:07:31.760754  781733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:07:31.769264  781733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:07:31.778023  781733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 09:07:31.786144  781733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:07:31.794903  781733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:07:31.808909  781733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
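	Taken together, the sed edits above are meant to leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and an unprivileged-port sysctl. A rough sketch of the intended result, assembled only from the substitutions shown in this log (the section headers are an assumption, not a dump of the real file):
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]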
	I0111 09:07:31.817728  781733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 09:07:31.825506  781733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 09:07:31.833456  781733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:07:31.947417  781733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 09:07:32.121677  781733 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 09:07:32.121755  781733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 09:07:32.125572  781733 start.go:574] Will wait 60s for crictl version
	I0111 09:07:32.125641  781733 ssh_runner.go:195] Run: which crictl
	I0111 09:07:32.129174  781733 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 09:07:32.156927  781733 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
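	For reference, the CRI endpoint configured above can be exercised by hand on the node. A minimal sketch using standard crictl flags and the /etc/crictl.yaml written earlier in this log (not part of this run):
	    cat /etc/crictl.yaml
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    sudo crictl info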
	I0111 09:07:32.157072  781733 ssh_runner.go:195] Run: crio --version
	I0111 09:07:32.185368  781733 ssh_runner.go:195] Run: crio --version
	I0111 09:07:32.214662  781733 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 09:07:32.217445  781733 cli_runner.go:164] Run: docker network inspect embed-certs-630626 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:07:32.233555  781733 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 09:07:32.237215  781733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:07:32.247024  781733 kubeadm.go:884] updating cluster {Name:embed-certs-630626 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-630626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 09:07:32.247143  781733 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:07:32.247203  781733 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:07:32.288099  781733 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:07:32.288121  781733 crio.go:433] Images already preloaded, skipping extraction
	I0111 09:07:32.288175  781733 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:07:32.313677  781733 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:07:32.313699  781733 cache_images.go:86] Images are preloaded, skipping loading
	I0111 09:07:32.313708  781733 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0111 09:07:32.313797  781733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-630626 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-630626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
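	Assuming the drop-in above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the path scp'd a few lines below), it can be inspected and reloaded with standard systemd tooling; a minimal sketch:
	    systemctl cat kubelet
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet
	    journalctl -u kubelet --no-pager -n 50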
	I0111 09:07:32.313882  781733 ssh_runner.go:195] Run: crio config
	I0111 09:07:32.383936  781733 cni.go:84] Creating CNI manager for ""
	I0111 09:07:32.383962  781733 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:07:32.383984  781733 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 09:07:32.384009  781733 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-630626 NodeName:embed-certs-630626 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 09:07:32.384136  781733 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-630626"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 09:07:32.384210  781733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 09:07:32.391536  781733 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 09:07:32.391624  781733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 09:07:32.398707  781733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0111 09:07:32.411272  781733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 09:07:32.424990  781733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
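	The rendered kubeadm config shown earlier is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A configuration of this shape can be sanity-checked with stock kubeadm before an init; a sketch only, not part of this run:
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run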
	I0111 09:07:32.437946  781733 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 09:07:32.441555  781733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:07:32.451271  781733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:07:32.575015  781733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:07:32.590541  781733 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626 for IP: 192.168.85.2
	I0111 09:07:32.590621  781733 certs.go:195] generating shared ca certs ...
	I0111 09:07:32.590657  781733 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:07:32.590832  781733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 09:07:32.590912  781733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 09:07:32.590946  781733 certs.go:257] generating profile certs ...
	I0111 09:07:32.591041  781733 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/client.key
	I0111 09:07:32.591080  781733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/client.crt with IP's: []
	I0111 09:07:32.723395  781733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/client.crt ...
	I0111 09:07:32.723469  781733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/client.crt: {Name:mk7e8a45ae7b177daf475bb1c9d064942f55b15a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:07:32.723710  781733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/client.key ...
	I0111 09:07:32.723788  781733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/client.key: {Name:mk931f0ab879fabadbe0b16a5dc8f686fc2ff068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:07:32.723923  781733 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.key.d6bdd2b3
	I0111 09:07:32.723964  781733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.crt.d6bdd2b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0111 09:07:32.872982  781733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.crt.d6bdd2b3 ...
	I0111 09:07:32.873017  781733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.crt.d6bdd2b3: {Name:mk21469ff15148d664e0fb88353a992a81274abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:07:32.873209  781733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.key.d6bdd2b3 ...
	I0111 09:07:32.873228  781733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.key.d6bdd2b3: {Name:mk8f2910e70e82c69a434aa98404a662dfa6af3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:07:32.873308  781733 certs.go:382] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.crt.d6bdd2b3 -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.crt
	I0111 09:07:32.873393  781733 certs.go:386] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.key.d6bdd2b3 -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.key
	I0111 09:07:32.873453  781733 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/proxy-client.key
	I0111 09:07:32.873472  781733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/proxy-client.crt with IP's: []
	I0111 09:07:33.132853  781733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/proxy-client.crt ...
	I0111 09:07:33.132888  781733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/proxy-client.crt: {Name:mk85f93806f2d73361080f824701aec3e7217ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:07:33.133076  781733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/proxy-client.key ...
	I0111 09:07:33.133089  781733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/proxy-client.key: {Name:mk1d971cd88784773729d35b333f8502c795e0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:07:33.133287  781733 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 09:07:33.133333  781733 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 09:07:33.133350  781733 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 09:07:33.133377  781733 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 09:07:33.133408  781733 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 09:07:33.133436  781733 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 09:07:33.133489  781733 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:07:33.134100  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 09:07:33.152496  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 09:07:33.171263  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 09:07:33.189653  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 09:07:33.207988  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0111 09:07:33.225775  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 09:07:33.243731  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 09:07:33.261544  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 09:07:33.279096  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 09:07:33.296919  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 09:07:33.314340  781733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 09:07:33.332573  781733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 09:07:33.345299  781733 ssh_runner.go:195] Run: openssl version
	I0111 09:07:33.351941  781733 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 09:07:33.359544  781733 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 09:07:33.367013  781733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 09:07:33.370686  781733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 09:07:33.370761  781733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 09:07:33.416963  781733 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 09:07:33.424369  781733 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5769072.pem /etc/ssl/certs/3ec20f2e.0
	I0111 09:07:33.446342  781733 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:07:33.463343  781733 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 09:07:33.485052  781733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:07:33.491979  781733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:07:33.492051  781733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:07:33.543638  781733 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 09:07:33.551456  781733 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 09:07:33.558977  781733 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 09:07:33.566550  781733 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 09:07:33.574167  781733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 09:07:33.578029  781733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 09:07:33.578096  781733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 09:07:33.619527  781733 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 09:07:33.627309  781733 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/576907.pem /etc/ssl/certs/51391683.0
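	The hash-named symlinks created above follow OpenSSL's subject-hash (c_rehash) convention: the link name is the value printed by "openssl x509 -hash". A small sketch of the same pattern for one of the certificates in this log:
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem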
	I0111 09:07:33.634898  781733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 09:07:33.638654  781733 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 09:07:33.638704  781733 kubeadm.go:401] StartCluster: {Name:embed-certs-630626 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-630626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:07:33.638776  781733 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 09:07:33.638842  781733 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 09:07:33.666280  781733 cri.go:96] found id: ""
	I0111 09:07:33.666348  781733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 09:07:33.674209  781733 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 09:07:33.682115  781733 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 09:07:33.682295  781733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 09:07:33.690076  781733 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 09:07:33.690097  781733 kubeadm.go:158] found existing configuration files:
	
	I0111 09:07:33.690221  781733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 09:07:33.697691  781733 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 09:07:33.697812  781733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 09:07:33.705208  781733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 09:07:33.712723  781733 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 09:07:33.712838  781733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 09:07:33.720491  781733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 09:07:33.728255  781733 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 09:07:33.728337  781733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 09:07:33.735531  781733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 09:07:33.743153  781733 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 09:07:33.743264  781733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 09:07:33.750795  781733 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 09:07:33.787644  781733 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 09:07:33.787998  781733 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 09:07:33.858624  781733 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 09:07:33.858710  781733 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 09:07:33.858759  781733 kubeadm.go:319] OS: Linux
	I0111 09:07:33.858810  781733 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 09:07:33.858870  781733 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 09:07:33.858929  781733 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 09:07:33.858991  781733 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 09:07:33.859051  781733 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 09:07:33.859105  781733 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 09:07:33.859171  781733 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 09:07:33.859231  781733 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 09:07:33.859289  781733 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 09:07:33.928822  781733 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 09:07:33.928940  781733 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 09:07:33.929039  781733 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 09:07:33.938653  781733 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 09:07:33.945131  781733 out.go:252]   - Generating certificates and keys ...
	I0111 09:07:33.945241  781733 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 09:07:33.945340  781733 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 09:07:34.209779  781733 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 09:07:34.442092  781733 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 09:07:34.718864  781733 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 09:07:34.857223  781733 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 09:07:35.121541  781733 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 09:07:35.121752  781733 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-630626 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 09:07:35.540393  781733 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 09:07:35.540789  781733 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-630626 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 09:07:35.671262  781733 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 09:07:36.498780  781733 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 09:07:37.162275  781733 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 09:07:37.162500  781733 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 09:07:37.564012  781733 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 09:07:37.775560  781733 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 09:07:37.914199  781733 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 09:07:38.548454  781733 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 09:07:39.271996  781733 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 09:07:39.272749  781733 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 09:07:39.275572  781733 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 09:07:39.279112  781733 out.go:252]   - Booting up control plane ...
	I0111 09:07:39.279220  781733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 09:07:39.279299  781733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 09:07:39.279961  781733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 09:07:39.296146  781733 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 09:07:39.296516  781733 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 09:07:39.305273  781733 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 09:07:39.305823  781733 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 09:07:39.306060  781733 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 09:07:39.436895  781733 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 09:07:39.437016  781733 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 09:07:40.938515  781733 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501227324s
	I0111 09:07:40.941533  781733 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0111 09:07:40.941721  781733 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0111 09:07:40.941819  781733 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0111 09:07:40.942430  781733 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0111 09:07:41.962732  781733 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.020102698s
	I0111 09:07:44.035819  781733 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.092870192s
	I0111 09:07:45.944158  781733 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002278537s
	I0111 09:07:46.004742  781733 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 09:07:46.045116  781733 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 09:07:46.063856  781733 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 09:07:46.064368  781733 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-630626 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 09:07:46.077725  781733 kubeadm.go:319] [bootstrap-token] Using token: 0di7k2.p778vu4mdt052ocr
	I0111 09:07:46.080931  781733 out.go:252]   - Configuring RBAC rules ...
	I0111 09:07:46.081148  781733 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 09:07:46.090504  781733 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 09:07:46.111428  781733 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 09:07:46.117770  781733 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 09:07:46.122903  781733 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 09:07:46.130722  781733 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 09:07:46.353933  781733 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 09:07:46.790916  781733 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 09:07:47.351991  781733 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 09:07:47.353495  781733 kubeadm.go:319] 
	I0111 09:07:47.353571  781733 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 09:07:47.353584  781733 kubeadm.go:319] 
	I0111 09:07:47.353661  781733 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 09:07:47.353675  781733 kubeadm.go:319] 
	I0111 09:07:47.353701  781733 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 09:07:47.353781  781733 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 09:07:47.353839  781733 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 09:07:47.353848  781733 kubeadm.go:319] 
	I0111 09:07:47.353903  781733 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 09:07:47.353911  781733 kubeadm.go:319] 
	I0111 09:07:47.353959  781733 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 09:07:47.353967  781733 kubeadm.go:319] 
	I0111 09:07:47.354019  781733 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 09:07:47.354097  781733 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 09:07:47.354190  781733 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 09:07:47.354200  781733 kubeadm.go:319] 
	I0111 09:07:47.354284  781733 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 09:07:47.354365  781733 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 09:07:47.354372  781733 kubeadm.go:319] 
	I0111 09:07:47.354457  781733 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0di7k2.p778vu4mdt052ocr \
	I0111 09:07:47.354569  781733 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb \
	I0111 09:07:47.354592  781733 kubeadm.go:319] 	--control-plane 
	I0111 09:07:47.354600  781733 kubeadm.go:319] 
	I0111 09:07:47.354684  781733 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 09:07:47.354692  781733 kubeadm.go:319] 
	I0111 09:07:47.354774  781733 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0di7k2.p778vu4mdt052ocr \
	I0111 09:07:47.354879  781733 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb 
	I0111 09:07:47.358779  781733 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:07:47.359207  781733 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 09:07:47.359325  781733 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
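	The bootstrap token in the join commands above has a 24h ttl (see the kubeadm config earlier). If it expires, an equivalent join command can be regenerated on the control-plane node; a sketch assuming the /var/lib/minikube/certs certificate dir used by this profile and an RSA CA key, as in the upstream kubeadm docs:
	    sudo kubeadm token create --print-join-command
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'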
	I0111 09:07:47.359346  781733 cni.go:84] Creating CNI manager for ""
	I0111 09:07:47.359359  781733 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:07:47.362609  781733 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 09:07:47.365411  781733 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 09:07:47.369463  781733 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0111 09:07:47.369484  781733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 09:07:47.382003  781733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0111 09:07:47.656737  781733 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 09:07:47.656885  781733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:07:47.656971  781733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-630626 minikube.k8s.io/updated_at=2026_01_11T09_07_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=embed-certs-630626 minikube.k8s.io/primary=true
	I0111 09:07:47.937465  781733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:07:47.937521  781733 ops.go:34] apiserver oom_adj: -16
	I0111 09:07:48.437954  781733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:07:48.938012  781733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:07:49.437628  781733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:07:49.937573  781733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:07:50.438240  781733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:07:50.937677  781733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:07:51.438451  781733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:07:51.938329  781733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:07:52.074045  781733 kubeadm.go:1114] duration metric: took 4.417228015s to wait for elevateKubeSystemPrivileges
	I0111 09:07:52.074095  781733 kubeadm.go:403] duration metric: took 18.435376434s to StartCluster
	I0111 09:07:52.074113  781733 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:07:52.074211  781733 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:07:52.075258  781733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:07:52.075497  781733 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:07:52.075632  781733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 09:07:52.075891  781733 config.go:182] Loaded profile config "embed-certs-630626": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:07:52.075941  781733 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:07:52.076003  781733 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-630626"
	I0111 09:07:52.076020  781733 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-630626"
	I0111 09:07:52.076047  781733 host.go:66] Checking if "embed-certs-630626" exists ...
	I0111 09:07:52.076563  781733 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:07:52.078363  781733 addons.go:70] Setting default-storageclass=true in profile "embed-certs-630626"
	I0111 09:07:52.078390  781733 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-630626"
	I0111 09:07:52.078729  781733 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:07:52.081496  781733 out.go:179] * Verifying Kubernetes components...
	I0111 09:07:52.088009  781733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:07:52.121374  781733 addons.go:239] Setting addon default-storageclass=true in "embed-certs-630626"
	I0111 09:07:52.121417  781733 host.go:66] Checking if "embed-certs-630626" exists ...
	I0111 09:07:52.121930  781733 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:07:52.125336  781733 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:07:52.127880  781733 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:07:52.127903  781733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:07:52.127988  781733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:07:52.169587  781733 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:07:52.169623  781733 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:07:52.169684  781733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:07:52.169922  781733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:07:52.198360  781733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:07:52.463520  781733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:07:52.463627  781733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0111 09:07:52.507314  781733 node_ready.go:35] waiting up to 6m0s for node "embed-certs-630626" to be "Ready" ...
	I0111 09:07:52.521741  781733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:07:52.610282  781733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:07:53.205265  781733 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
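	The injected host record can be confirmed afterwards with plain kubectl (context name taken from this run; a sketch, not part of the log):
	    kubectl --context embed-certs-630626 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'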
	I0111 09:07:53.472284  781733 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0111 09:07:53.475119  781733 addons.go:530] duration metric: took 1.399176373s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0111 09:07:53.709107  781733 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-630626" context rescaled to 1 replicas
	W0111 09:07:54.509977  781733 node_ready.go:57] node "embed-certs-630626" has "Ready":"False" status (will retry)
	W0111 09:07:56.510646  781733 node_ready.go:57] node "embed-certs-630626" has "Ready":"False" status (will retry)
	W0111 09:07:58.510731  781733 node_ready.go:57] node "embed-certs-630626" has "Ready":"False" status (will retry)
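	While the wait loop above retries, node readiness can also be checked directly against the API server; a minimal sketch with standard kubectl, using the context and node names from this run:
	    kubectl --context embed-certs-630626 get node embed-certs-630626 -o wide
	    kubectl --context embed-certs-630626 get node embed-certs-630626 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'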
	I0111 09:08:03.470270  757749 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000128716s
	I0111 09:08:03.470648  757749 kubeadm.go:319] 
	I0111 09:08:03.470766  757749 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 09:08:03.470824  757749 kubeadm.go:319] 	- The kubelet is not running
	I0111 09:08:03.471138  757749 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 09:08:03.471147  757749 kubeadm.go:319] 
	I0111 09:08:03.471326  757749 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 09:08:03.471395  757749 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 09:08:03.471646  757749 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 09:08:03.471655  757749 kubeadm.go:319] 
	I0111 09:08:03.482608  757749 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:08:03.483183  757749 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 09:08:03.483345  757749 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 09:08:03.483671  757749 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0111 09:08:03.483715  757749 kubeadm.go:319] 
	I0111 09:08:03.483840  757749 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 09:08:03.483920  757749 kubeadm.go:403] duration metric: took 8m7.370458715s to StartCluster
	I0111 09:08:03.483996  757749 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0111 09:08:03.484109  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0111 09:08:03.516164  757749 cri.go:96] found id: ""
	I0111 09:08:03.516254  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.516277  757749 logs.go:284] No container was found matching "kube-apiserver"
	I0111 09:08:03.516312  757749 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0111 09:08:03.516408  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0111 09:08:03.545775  757749 cri.go:96] found id: ""
	I0111 09:08:03.545857  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.545881  757749 logs.go:284] No container was found matching "etcd"
	I0111 09:08:03.545915  757749 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0111 09:08:03.546011  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0111 09:08:03.575175  757749 cri.go:96] found id: ""
	I0111 09:08:03.575200  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.575209  757749 logs.go:284] No container was found matching "coredns"
	I0111 09:08:03.575215  757749 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0111 09:08:03.575305  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0111 09:08:03.601709  757749 cri.go:96] found id: ""
	I0111 09:08:03.601739  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.601748  757749 logs.go:284] No container was found matching "kube-scheduler"
	I0111 09:08:03.601755  757749 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0111 09:08:03.601813  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0111 09:08:03.629040  757749 cri.go:96] found id: ""
	I0111 09:08:03.629068  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.629076  757749 logs.go:284] No container was found matching "kube-proxy"
	I0111 09:08:03.629083  757749 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0111 09:08:03.629148  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0111 09:08:03.656996  757749 cri.go:96] found id: ""
	I0111 09:08:03.657025  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.657034  757749 logs.go:284] No container was found matching "kube-controller-manager"
	I0111 09:08:03.657041  757749 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0111 09:08:03.657103  757749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0111 09:08:03.700703  757749 cri.go:96] found id: ""
	I0111 09:08:03.700777  757749 logs.go:282] 0 containers: []
	W0111 09:08:03.700798  757749 logs.go:284] No container was found matching "kindnet"
	I0111 09:08:03.700821  757749 logs.go:123] Gathering logs for kubelet ...
	I0111 09:08:03.700859  757749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0111 09:08:03.778858  757749 logs.go:123] Gathering logs for dmesg ...
	I0111 09:08:03.778899  757749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0111 09:08:03.798167  757749 logs.go:123] Gathering logs for describe nodes ...
	I0111 09:08:03.798197  757749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0111 09:08:03.923385  757749 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 09:08:03.914581    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.915336    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.917042    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.917378    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.918777    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0111 09:08:03.914581    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.915336    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.917042    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.917378    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:03.918777    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0111 09:08:03.923408  757749 logs.go:123] Gathering logs for CRI-O ...
	I0111 09:08:03.923423  757749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0111 09:08:03.960061  757749 logs.go:123] Gathering logs for container status ...
	I0111 09:08:03.960095  757749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0111 09:08:03.989973  757749 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128716s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W0111 09:08:03.990062  757749 out.go:285] * 
	W0111 09:08:03.990262  757749 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128716s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 09:08:03.990288  757749 out.go:285] * 
	W0111 09:08:03.990561  757749 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 09:08:03.997011  757749 out.go:203] 
	W0111 09:08:04.000016  757749 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000128716s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 09:08:04.000073  757749 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0111 09:08:04.000094  757749 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0111 09:08:04.004046  757749 out.go:203] 
	
	
	==> CRI-O <==
	Jan 11 08:59:54 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:54.495186654Z" level=info msg="Registered SIGHUP reload watcher"
	Jan 11 08:59:54 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:54.495237904Z" level=info msg="Starting seccomp notifier watcher"
	Jan 11 08:59:54 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:54.495335703Z" level=info msg="Create NRI interface"
	Jan 11 08:59:54 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:54.495425821Z" level=info msg="built-in NRI default validator is disabled"
	Jan 11 08:59:54 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:54.495439261Z" level=info msg="runtime interface created"
	Jan 11 08:59:54 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:54.495452135Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Jan 11 08:59:54 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:54.495458461Z" level=info msg="runtime interface starting up..."
	Jan 11 08:59:54 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:54.495464476Z" level=info msg="starting plugins..."
	Jan 11 08:59:54 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:54.495477505Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 11 08:59:54 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:54.495549917Z" level=info msg="No systemd watchdog enabled"
	Jan 11 08:59:54 force-systemd-flag-630015 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Jan 11 08:59:56 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:56.413372561Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=113cc05d-c2ca-49ac-a31c-c1fbed38fe69 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:59:56 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:56.414358248Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=b03f62d6-2dfb-4109-8703-4f54ec5d527c name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:59:56 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:56.414789966Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=2aee6254-a936-43b7-b921-222fd6193fe4 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:59:56 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:56.415170369Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=285a1be9-a423-4e3f-86fe-e7802f783fc3 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:59:56 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:56.415524072Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=7ff2f5f2-b397-459a-b02c-5356dff5dd2c name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:59:56 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:56.415870218Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=726c99d8-7dc3-4e16-bb4f-296d2996e2d6 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:59:56 force-systemd-flag-630015 crio[836]: time="2026-01-11T08:59:56.4162317Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=34101626-9c4a-4fb7-b5a7-1da3f41d05c7 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:04:01 force-systemd-flag-630015 crio[836]: time="2026-01-11T09:04:01.335396126Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=b0314d77-009c-4189-ac57-6efbfd042722 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:04:01 force-systemd-flag-630015 crio[836]: time="2026-01-11T09:04:01.336188382Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=16b88533-6499-45ec-9ffd-4d9b73ad4dd4 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:04:01 force-systemd-flag-630015 crio[836]: time="2026-01-11T09:04:01.336760763Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=cdefff53-be5f-4226-9843-fb0b3e2a7dfe name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:04:01 force-systemd-flag-630015 crio[836]: time="2026-01-11T09:04:01.337307912Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=f4f808b5-e741-4673-b4db-ff5b0dd3102a name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:04:01 force-systemd-flag-630015 crio[836]: time="2026-01-11T09:04:01.337812739Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=992108a2-11b0-4d16-b75f-1dcfbc19c967 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:04:01 force-systemd-flag-630015 crio[836]: time="2026-01-11T09:04:01.338449851Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=1570807c-adb3-4044-b59b-1cd318d3f7f4 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:04:01 force-systemd-flag-630015 crio[836]: time="2026-01-11T09:04:01.338949722Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=038a2c1c-968b-44ce-8b87-6475ea2b6356 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 09:08:05.505954    5007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:05.509352    5007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:05.512357    5007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:05.513042    5007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:08:05.514724    5007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan11 08:35] overlayfs: idmapped layers are currently not supported
	[Jan11 08:36] overlayfs: idmapped layers are currently not supported
	[Jan11 08:37] overlayfs: idmapped layers are currently not supported
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	[Jan11 09:06] overlayfs: idmapped layers are currently not supported
	[Jan11 09:07] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 09:08:05 up  3:50,  0 user,  load average: 1.99, 1.54, 1.81
	Linux force-systemd-flag-630015 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 11 09:08:02 force-systemd-flag-630015 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 09:08:03 force-systemd-flag-630015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 643.
	Jan 11 09:08:03 force-systemd-flag-630015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 09:08:03 force-systemd-flag-630015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 09:08:03 force-systemd-flag-630015 kubelet[4804]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 11 09:08:03 force-systemd-flag-630015 kubelet[4804]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 11 09:08:03 force-systemd-flag-630015 kubelet[4804]: E0111 09:08:03.489174    4804 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 09:08:03 force-systemd-flag-630015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 09:08:03 force-systemd-flag-630015 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 09:08:04 force-systemd-flag-630015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 644.
	Jan 11 09:08:04 force-systemd-flag-630015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 09:08:04 force-systemd-flag-630015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 09:08:04 force-systemd-flag-630015 kubelet[4896]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 11 09:08:04 force-systemd-flag-630015 kubelet[4896]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 11 09:08:04 force-systemd-flag-630015 kubelet[4896]: E0111 09:08:04.242264    4896 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 09:08:04 force-systemd-flag-630015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 09:08:04 force-systemd-flag-630015 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 09:08:04 force-systemd-flag-630015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 645.
	Jan 11 09:08:04 force-systemd-flag-630015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 09:08:04 force-systemd-flag-630015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 09:08:05 force-systemd-flag-630015 kubelet[4924]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 11 09:08:05 force-systemd-flag-630015 kubelet[4924]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 11 09:08:05 force-systemd-flag-630015 kubelet[4924]: E0111 09:08:05.030970    4924 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 09:08:05 force-systemd-flag-630015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 09:08:05 force-systemd-flag-630015 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-630015 -n force-systemd-flag-630015
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-630015 -n force-systemd-flag-630015: exit status 6 (476.954797ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 09:08:06.171657  784937 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-630015" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-630015" apiserver is not running, skipping kubectl commands (state="Stopped")
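The "stale minikube-vm" warning in the status output above stems from the profile's endpoint missing from the kubeconfig (hence exit status 6). The harness simply deletes the profile below; in an interactive session the repair the warning points at would be a single command (sketch only, using this run's profile name):

	out/minikube-linux-arm64 update-context -p force-systemd-flag-630015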
helpers_test.go:176: Cleaning up "force-systemd-flag-630015" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-630015
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-630015: (2.110640333s)
--- FAIL: TestForceSystemdFlag (505.61s)
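The kubelet journal above pins down the failure: kubelet v1.35 exits during config validation because the node is still on a cgroup v1 hierarchy ("kubelet is configured to not run on a host using cgroup v1"), so no static control-plane pods ever start and kubeadm's 4m0s health check against 127.0.0.1:10248 times out. A minimal triage sketch for a local rerun follows; it assumes shell access to the node via `minikube ssh`, the --extra-config value is the one the failure output itself suggests, and whether that is enough on a cgroup v1 kernel such as 5.15.0-1084-aws is not verified here:

	# inside the node, e.g. `out/minikube-linux-arm64 ssh -p force-systemd-flag-630015`
	stat -fc %T /sys/fs/cgroup/      # "cgroup2fs" means cgroup v2, "tmpfs" means cgroup v1
	systemctl status kubelet         # the restart counter above had already reached 645
	journalctl -xeu kubelet | tail -n 50

	# from the host, retry the start with the suggested kubelet cgroup driver
	out/minikube-linux-arm64 start -p force-systemd-flag-630015 --driver=docker \
	  --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd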

                                                
                                    
x
+
TestForceSystemdEnv (507.19s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-472282 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-472282 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m23.732726537s)

                                                
                                                
-- stdout --
	* [force-systemd-env-472282] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-472282" primary control-plane node in "force-systemd-env-472282" cluster
	* Pulling base image v0.0.48-1768032998-22402 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:53:19.104513  738378 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:53:19.112372  738378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:53:19.116846  738378 out.go:374] Setting ErrFile to fd 2...
	I0111 08:53:19.117086  738378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:53:19.117763  738378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:53:19.118328  738378 out.go:368] Setting JSON to false
	I0111 08:53:19.119654  738378 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12949,"bootTime":1768108650,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 08:53:19.119782  738378 start.go:143] virtualization:  
	I0111 08:53:19.126794  738378 out.go:179] * [force-systemd-env-472282] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:53:19.132691  738378 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:53:19.132831  738378 notify.go:221] Checking for updates...
	I0111 08:53:19.145954  738378 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:53:19.153453  738378 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:53:19.157115  738378 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 08:53:19.160425  738378 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:53:19.163782  738378 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I0111 08:53:19.167664  738378 config.go:182] Loaded profile config "test-preload-821036": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:53:19.167869  738378 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:53:19.224307  738378 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:53:19.224434  738378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:53:19.334708  738378 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:69 SystemTime:2026-01-11 08:53:19.316461232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:53:19.334823  738378 docker.go:319] overlay module found
	I0111 08:53:19.340184  738378 out.go:179] * Using the docker driver based on user configuration
	I0111 08:53:19.343094  738378 start.go:309] selected driver: docker
	I0111 08:53:19.343116  738378 start.go:928] validating driver "docker" against <nil>
	I0111 08:53:19.343130  738378 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:53:19.343970  738378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:53:19.494963  738378 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:69 SystemTime:2026-01-11 08:53:19.481386965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:53:19.495210  738378 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:53:19.495438  738378 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 08:53:19.498676  738378 out.go:179] * Using Docker driver with root privileges
	I0111 08:53:19.501496  738378 cni.go:84] Creating CNI manager for ""
	I0111 08:53:19.501639  738378 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:53:19.501654  738378 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 08:53:19.501732  738378 start.go:353] cluster config:
	{Name:force-systemd-env-472282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-472282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:53:19.506881  738378 out.go:179] * Starting "force-systemd-env-472282" primary control-plane node in "force-systemd-env-472282" cluster
	I0111 08:53:19.510439  738378 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 08:53:19.513618  738378 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:53:19.519315  738378 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:53:19.519364  738378 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 08:53:19.519375  738378 cache.go:65] Caching tarball of preloaded images
	I0111 08:53:19.519478  738378 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 08:53:19.519494  738378 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 08:53:19.519592  738378 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/config.json ...
	I0111 08:53:19.519623  738378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/config.json: {Name:mk6f5b0c68dd2e9138e0ec5a62286e7c299b8133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:53:19.519775  738378 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:53:19.554748  738378 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:53:19.554769  738378 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:53:19.554789  738378 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:53:19.554821  738378 start.go:360] acquireMachinesLock for force-systemd-env-472282: {Name:mkf583ab8058cb3b9bbe6a70cc2f98589ed8a193 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:53:19.554917  738378 start.go:364] duration metric: took 80.715µs to acquireMachinesLock for "force-systemd-env-472282"
	I0111 08:53:19.554946  738378 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-472282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-472282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 08:53:19.555011  738378 start.go:125] createHost starting for "" (driver="docker")
	I0111 08:53:19.558545  738378 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 08:53:19.558820  738378 start.go:159] libmachine.API.Create for "force-systemd-env-472282" (driver="docker")
	I0111 08:53:19.558851  738378 client.go:173] LocalClient.Create starting
	I0111 08:53:19.558914  738378 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem
	I0111 08:53:19.558977  738378 main.go:144] libmachine: Decoding PEM data...
	I0111 08:53:19.558995  738378 main.go:144] libmachine: Parsing certificate...
	I0111 08:53:19.559035  738378 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem
	I0111 08:53:19.559050  738378 main.go:144] libmachine: Decoding PEM data...
	I0111 08:53:19.559060  738378 main.go:144] libmachine: Parsing certificate...
	I0111 08:53:19.559421  738378 cli_runner.go:164] Run: docker network inspect force-systemd-env-472282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 08:53:19.576376  738378 cli_runner.go:211] docker network inspect force-systemd-env-472282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 08:53:19.576468  738378 network_create.go:284] running [docker network inspect force-systemd-env-472282] to gather additional debugging logs...
	I0111 08:53:19.576485  738378 cli_runner.go:164] Run: docker network inspect force-systemd-env-472282
	W0111 08:53:19.595824  738378 cli_runner.go:211] docker network inspect force-systemd-env-472282 returned with exit code 1
	I0111 08:53:19.595851  738378 network_create.go:287] error running [docker network inspect force-systemd-env-472282]: docker network inspect force-systemd-env-472282: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-472282 not found
	I0111 08:53:19.595864  738378 network_create.go:289] output of [docker network inspect force-systemd-env-472282]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-472282 not found
	
	** /stderr **
	I0111 08:53:19.595973  738378 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:53:19.617842  738378 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-113e3e286bbe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:2e:86:95:08:19} reservation:<nil>}
	I0111 08:53:19.618232  738378 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461c1a9d970d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:7e:25:fe:d0:0d} reservation:<nil>}
	I0111 08:53:19.618605  738378 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a38e10816f85 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:42:af:ae:32:ae} reservation:<nil>}
	I0111 08:53:19.618955  738378 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f3bbc0f14ea4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:61:f6:e5:16:07} reservation:<nil>}
	I0111 08:53:19.619352  738378 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019beff0}
	I0111 08:53:19.619371  738378 network_create.go:124] attempt to create docker network force-systemd-env-472282 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 08:53:19.619429  738378 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-472282 force-systemd-env-472282
	I0111 08:53:19.727921  738378 network_create.go:108] docker network force-systemd-env-472282 192.168.85.0/24 created
	I0111 08:53:19.727951  738378 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-472282" container
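The subnet scan above walks the private 192.168.x.0/24 ranges, skips the four already claimed by other profile bridges, and settles on 192.168.85.0/24 before creating the bridge network. A minimal Go sketch of that first-free-subnet idea, assuming a hard-coded candidate list that steps by 9 the way the addresses in this log do (minikube's real selection lives in its network package and is more general):

// Illustrative sketch only; not minikube's actual pkg/network code.
package main

import "fmt"

// firstFreeSubnet returns the first candidate /24 that is not already taken.
func firstFreeSubnet(taken map[string]bool) string {
	// Candidates mirror the gaps seen in the log: 192.168.49.0/24, .58, .67, .76, .85, ...
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24, matching the log above
}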
	I0111 08:53:19.728034  738378 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 08:53:19.748591  738378 cli_runner.go:164] Run: docker volume create force-systemd-env-472282 --label name.minikube.sigs.k8s.io=force-systemd-env-472282 --label created_by.minikube.sigs.k8s.io=true
	I0111 08:53:19.772379  738378 oci.go:103] Successfully created a docker volume force-systemd-env-472282
	I0111 08:53:19.772477  738378 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-472282-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-472282 --entrypoint /usr/bin/test -v force-systemd-env-472282:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 08:53:20.707240  738378 oci.go:107] Successfully prepared a docker volume force-systemd-env-472282
	I0111 08:53:20.707294  738378 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:53:20.707304  738378 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 08:53:20.707385  738378 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-472282:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 08:53:25.413313  738378 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-472282:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.705879529s)
	I0111 08:53:25.413351  738378 kic.go:203] duration metric: took 4.706043805s to extract preloaded images to volume ...
	W0111 08:53:25.413491  738378 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 08:53:25.413603  738378 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 08:53:25.495353  738378 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-472282 --name force-systemd-env-472282 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-472282 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-472282 --network force-systemd-env-472282 --ip 192.168.85.2 --volume force-systemd-env-472282:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 08:53:25.873924  738378 cli_runner.go:164] Run: docker container inspect force-systemd-env-472282 --format={{.State.Running}}
	I0111 08:53:25.916472  738378 cli_runner.go:164] Run: docker container inspect force-systemd-env-472282 --format={{.State.Status}}
	I0111 08:53:25.960011  738378 cli_runner.go:164] Run: docker exec force-systemd-env-472282 stat /var/lib/dpkg/alternatives/iptables
	I0111 08:53:26.027094  738378 oci.go:144] the created container "force-systemd-env-472282" has a running status.
	I0111 08:53:26.027132  738378 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-env-472282/id_rsa...
	I0111 08:53:26.374355  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-env-472282/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0111 08:53:26.374453  738378 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-env-472282/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 08:53:26.410522  738378 cli_runner.go:164] Run: docker container inspect force-systemd-env-472282 --format={{.State.Status}}
	I0111 08:53:26.442228  738378 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 08:53:26.442252  738378 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-472282 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 08:53:26.510823  738378 cli_runner.go:164] Run: docker container inspect force-systemd-env-472282 --format={{.State.Status}}
	I0111 08:53:26.542553  738378 machine.go:94] provisionDockerMachine start ...
	I0111 08:53:26.542637  738378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-472282
	I0111 08:53:26.575869  738378 main.go:144] libmachine: Using SSH client type: native
	I0111 08:53:26.576222  738378 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33743 <nil> <nil>}
	I0111 08:53:26.576238  738378 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:53:26.576942  738378 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59346->127.0.0.1:33743: read: connection reset by peer
	I0111 08:53:29.753671  738378 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-472282
	
	I0111 08:53:29.753749  738378 ubuntu.go:182] provisioning hostname "force-systemd-env-472282"
	I0111 08:53:29.753863  738378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-472282
	I0111 08:53:29.781269  738378 main.go:144] libmachine: Using SSH client type: native
	I0111 08:53:29.781577  738378 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33743 <nil> <nil>}
	I0111 08:53:29.781588  738378 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-472282 && echo "force-systemd-env-472282" | sudo tee /etc/hostname
	I0111 08:53:29.959394  738378 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-472282
	
	I0111 08:53:29.959504  738378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-472282
	I0111 08:53:29.986298  738378 main.go:144] libmachine: Using SSH client type: native
	I0111 08:53:29.986632  738378 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33743 <nil> <nil>}
	I0111 08:53:29.986658  738378 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-472282' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-472282/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-472282' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:53:30.162983  738378 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 08:53:30.163032  738378 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 08:53:30.163066  738378 ubuntu.go:190] setting up certificates
	I0111 08:53:30.163081  738378 provision.go:84] configureAuth start
	I0111 08:53:30.163145  738378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-472282
	I0111 08:53:30.188182  738378 provision.go:143] copyHostCerts
	I0111 08:53:30.188224  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 08:53:30.188373  738378 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 08:53:30.188386  738378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 08:53:30.188532  738378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 08:53:30.188709  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 08:53:30.188736  738378 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 08:53:30.188781  738378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 08:53:30.188868  738378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 08:53:30.188989  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 08:53:30.189048  738378 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 08:53:30.189053  738378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 08:53:30.189130  738378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 08:53:30.189278  738378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-472282 san=[127.0.0.1 192.168.85.2 force-systemd-env-472282 localhost minikube]
	I0111 08:53:30.478629  738378 provision.go:177] copyRemoteCerts
	I0111 08:53:30.478748  738378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:53:30.478835  738378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-472282
	I0111 08:53:30.496850  738378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-env-472282/id_rsa Username:docker}
	I0111 08:53:30.612345  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0111 08:53:30.612402  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 08:53:30.635884  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0111 08:53:30.635943  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 08:53:30.656739  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0111 08:53:30.656801  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0111 08:53:30.677357  738378 provision.go:87] duration metric: took 514.251718ms to configureAuth
	I0111 08:53:30.677438  738378 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:53:30.677667  738378 config.go:182] Loaded profile config "force-systemd-env-472282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:53:30.677822  738378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-472282
	I0111 08:53:30.696210  738378 main.go:144] libmachine: Using SSH client type: native
	I0111 08:53:30.696526  738378 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33743 <nil> <nil>}
	I0111 08:53:30.696540  738378 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 08:53:31.047307  738378 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 08:53:31.047332  738378 machine.go:97] duration metric: took 4.504757606s to provisionDockerMachine
	I0111 08:53:31.047353  738378 client.go:176] duration metric: took 11.488496425s to LocalClient.Create
	I0111 08:53:31.047367  738378 start.go:167] duration metric: took 11.488549644s to libmachine.API.Create "force-systemd-env-472282"
	I0111 08:53:31.047378  738378 start.go:293] postStartSetup for "force-systemd-env-472282" (driver="docker")
	I0111 08:53:31.047391  738378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:53:31.047454  738378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:53:31.047495  738378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-472282
	I0111 08:53:31.068441  738378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-env-472282/id_rsa Username:docker}
	I0111 08:53:31.177533  738378 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:53:31.183778  738378 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:53:31.183808  738378 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:53:31.183820  738378 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 08:53:31.183881  738378 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 08:53:31.183974  738378 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 08:53:31.183986  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> /etc/ssl/certs/5769072.pem
	I0111 08:53:31.184090  738378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:53:31.198612  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 08:53:31.229869  738378 start.go:296] duration metric: took 182.473821ms for postStartSetup
	I0111 08:53:31.230375  738378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-472282
	I0111 08:53:31.255658  738378 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/config.json ...
	I0111 08:53:31.255937  738378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:53:31.255980  738378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-472282
	I0111 08:53:31.294354  738378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-env-472282/id_rsa Username:docker}
	I0111 08:53:31.411899  738378 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:53:31.417078  738378 start.go:128] duration metric: took 11.862051395s to createHost
	I0111 08:53:31.417108  738378 start.go:83] releasing machines lock for "force-systemd-env-472282", held for 11.862175573s
	I0111 08:53:31.417189  738378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-472282
	I0111 08:53:31.443297  738378 ssh_runner.go:195] Run: cat /version.json
	I0111 08:53:31.443356  738378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-472282
	I0111 08:53:31.443672  738378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:53:31.443720  738378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-472282
	I0111 08:53:31.477363  738378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-env-472282/id_rsa Username:docker}
	I0111 08:53:31.486253  738378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33743 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-env-472282/id_rsa Username:docker}
	I0111 08:53:31.594362  738378 ssh_runner.go:195] Run: systemctl --version
	I0111 08:53:31.724849  738378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 08:53:31.785185  738378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:53:31.791388  738378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:53:31.791471  738378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:53:31.828553  738378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 08:53:31.828574  738378 start.go:496] detecting cgroup driver to use...
	I0111 08:53:31.828603  738378 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0111 08:53:31.828655  738378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 08:53:31.854368  738378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:53:31.878674  738378 docker.go:218] disabling cri-docker service (if available) ...
	I0111 08:53:31.878792  738378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 08:53:31.898526  738378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 08:53:31.923714  738378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 08:53:32.083398  738378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 08:53:32.208891  738378 docker.go:234] disabling docker service ...
	I0111 08:53:32.209007  738378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 08:53:32.231437  738378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 08:53:32.244990  738378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 08:53:32.355583  738378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 08:53:32.476207  738378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 08:53:32.489361  738378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:53:32.503772  738378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 08:53:32.503888  738378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:53:32.520669  738378 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 08:53:32.520742  738378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:53:32.542635  738378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:53:32.555741  738378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:53:32.568089  738378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:53:32.583414  738378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:53:32.593292  738378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:53:32.609135  738378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:53:32.618712  738378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:53:32.628461  738378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:53:32.637254  738378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:53:32.804944  738378 ssh_runner.go:195] Run: sudo systemctl restart crio
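The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the systemd cgroup manager, runs conmon in the pod cgroup, and uses registry.k8s.io/pause:3.10.1 as the pause image, after which crio is restarted. A minimal in-memory sketch of those substitutions, assuming a made-up sample config fragment rather than the real file edited over SSH:

// Illustrative sketch only; the test itself performs these edits with sed over SSH.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
pause_image = "registry.k8s.io/pause:3.9"
`
	// Force the systemd cgroup driver, mirroring the cgroup_manager sed above.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	// Drop any existing conmon_cgroup line, then re-add it as "pod" right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	// Point CRI-O at the pause image the log configures.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	fmt.Print(conf)
}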
	I0111 08:53:33.034178  738378 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 08:53:33.034267  738378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 08:53:33.038204  738378 start.go:574] Will wait 60s for crictl version
	I0111 08:53:33.038284  738378 ssh_runner.go:195] Run: which crictl
	I0111 08:53:33.041650  738378 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:53:33.073511  738378 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 08:53:33.073654  738378 ssh_runner.go:195] Run: crio --version
	I0111 08:53:33.120434  738378 ssh_runner.go:195] Run: crio --version
	I0111 08:53:33.167611  738378 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 08:53:33.170521  738378 cli_runner.go:164] Run: docker network inspect force-systemd-env-472282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:53:33.194574  738378 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 08:53:33.201160  738378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:53:33.211918  738378 kubeadm.go:884] updating cluster {Name:force-systemd-env-472282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-472282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:53:33.212039  738378 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:53:33.212094  738378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:53:33.274863  738378 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:53:33.274885  738378 crio.go:433] Images already preloaded, skipping extraction
	I0111 08:53:33.274938  738378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:53:33.313634  738378 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:53:33.313712  738378 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:53:33.313735  738378 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0111 08:53:33.313868  738378 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-472282 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-472282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:53:33.313989  738378 ssh_runner.go:195] Run: crio config
	I0111 08:53:33.380695  738378 cni.go:84] Creating CNI manager for ""
	I0111 08:53:33.380727  738378 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:53:33.380746  738378 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:53:33.380779  738378 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-472282 NodeName:force-systemd-env-472282 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:53:33.380932  738378 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-472282"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 08:53:33.381017  738378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:53:33.392468  738378 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:53:33.392556  738378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:53:33.401038  738378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0111 08:53:33.424921  738378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:53:33.441544  738378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I0111 08:53:33.454765  738378 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:53:33.459087  738378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:53:33.468592  738378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:53:33.617426  738378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:53:33.634200  738378 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282 for IP: 192.168.85.2
	I0111 08:53:33.634224  738378 certs.go:195] generating shared ca certs ...
	I0111 08:53:33.634270  738378 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:53:33.634444  738378 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 08:53:33.634498  738378 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 08:53:33.634511  738378 certs.go:257] generating profile certs ...
	I0111 08:53:33.634580  738378 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/client.key
	I0111 08:53:33.634605  738378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/client.crt with IP's: []
	I0111 08:53:33.976400  738378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/client.crt ...
	I0111 08:53:33.976433  738378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/client.crt: {Name:mkdce1fbee1df008b40f044794983ddace589981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:53:33.976627  738378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/client.key ...
	I0111 08:53:33.976641  738378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/client.key: {Name:mk4578f1e9d3e37e3456946f26565acdff25169d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:53:33.976733  738378 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.key.290489c3
	I0111 08:53:33.976750  738378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.crt.290489c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0111 08:53:34.359318  738378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.crt.290489c3 ...
	I0111 08:53:34.359350  738378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.crt.290489c3: {Name:mkf4faf6d8ae63a1663368e2f9c14399f8e6d31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:53:34.359542  738378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.key.290489c3 ...
	I0111 08:53:34.359559  738378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.key.290489c3: {Name:mk1d56b7fa75edb1f96ea6b62cef71ff719d22c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:53:34.359641  738378 certs.go:382] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.crt.290489c3 -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.crt
	I0111 08:53:34.359730  738378 certs.go:386] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.key.290489c3 -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.key
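The profile apiserver certificate generated above is signed by the shared minikubeCA with the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2. A self-contained Go sketch of that kind of CA-signed server certificate using crypto/x509, assuming throwaway RSA keys instead of the ca.key and apiserver.key files kept under .minikube (minikube's own cert helpers differ in detail):

// Illustrative sketch only; not minikube's certificate code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// CA key plus a self-signed CA certificate (stand-in for minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the IP SANs listed in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(len(der) > 0, err) // true <nil> on success
}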
	I0111 08:53:34.359791  738378 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/proxy-client.key
	I0111 08:53:34.359809  738378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/proxy-client.crt with IP's: []
	I0111 08:53:35.016031  738378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/proxy-client.crt ...
	I0111 08:53:35.016111  738378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/proxy-client.crt: {Name:mk0dcaf57ed53d669542868cac629b1df10f0bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:53:35.016371  738378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/proxy-client.key ...
	I0111 08:53:35.016412  738378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/proxy-client.key: {Name:mkbe8d5c75d0a23b73d990e3be0bc4379b23cd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:53:35.016560  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0111 08:53:35.016607  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0111 08:53:35.016649  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0111 08:53:35.016692  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0111 08:53:35.016727  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0111 08:53:35.016761  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0111 08:53:35.016808  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0111 08:53:35.016845  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0111 08:53:35.016941  738378 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 08:53:35.017015  738378 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 08:53:35.017054  738378 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 08:53:35.017109  738378 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 08:53:35.017169  738378 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:53:35.017227  738378 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 08:53:35.017315  738378 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 08:53:35.017375  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> /usr/share/ca-certificates/5769072.pem
	I0111 08:53:35.017403  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:53:35.017452  738378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem -> /usr/share/ca-certificates/576907.pem
	I0111 08:53:35.018102  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:53:35.042445  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 08:53:35.065156  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:53:35.083992  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:53:35.107433  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0111 08:53:35.128970  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0111 08:53:35.159173  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:53:35.194520  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-env-472282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 08:53:35.223043  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 08:53:35.267652  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:53:35.296242  738378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 08:53:35.315151  738378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:53:35.328563  738378 ssh_runner.go:195] Run: openssl version
	I0111 08:53:35.335589  738378 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 08:53:35.343194  738378 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 08:53:35.350909  738378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 08:53:35.355106  738378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 08:53:35.355184  738378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 08:53:35.399091  738378 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:53:35.407248  738378 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5769072.pem /etc/ssl/certs/3ec20f2e.0
	I0111 08:53:35.414947  738378 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:53:35.422995  738378 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:53:35.430669  738378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:53:35.434602  738378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:53:35.434698  738378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:53:35.485574  738378 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:53:35.493158  738378 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 08:53:35.501037  738378 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 08:53:35.513908  738378 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 08:53:35.522969  738378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 08:53:35.528144  738378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 08:53:35.528218  738378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 08:53:35.582788  738378 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:53:35.590280  738378 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/576907.pem /etc/ssl/certs/51391683.0
	I0111 08:53:35.597669  738378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:53:35.602017  738378 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 08:53:35.602074  738378 kubeadm.go:401] StartCluster: {Name:force-systemd-env-472282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-472282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:53:35.602160  738378 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:53:35.602233  738378 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:53:35.631393  738378 cri.go:96] found id: ""
	I0111 08:53:35.631476  738378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:53:35.641443  738378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:53:35.649169  738378 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:53:35.649242  738378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:53:35.659514  738378 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:53:35.659536  738378 kubeadm.go:158] found existing configuration files:
	
	I0111 08:53:35.659587  738378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:53:35.667710  738378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:53:35.667779  738378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:53:35.676367  738378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:53:35.685034  738378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:53:35.685110  738378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:53:35.698120  738378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:53:35.710892  738378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:53:35.710963  738378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:53:35.721534  738378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:53:35.730324  738378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:53:35.730393  738378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:53:35.737894  738378 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:53:35.792655  738378 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:53:35.793589  738378 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:53:35.912338  738378 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:53:35.912430  738378 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:53:35.912471  738378 kubeadm.go:319] OS: Linux
	I0111 08:53:35.912521  738378 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:53:35.912573  738378 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:53:35.912625  738378 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:53:35.912677  738378 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:53:35.912727  738378 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:53:35.912780  738378 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:53:35.912828  738378 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:53:35.912905  738378 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:53:35.912960  738378 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:53:36.019639  738378 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:53:36.019755  738378 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:53:36.019852  738378 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:53:36.038591  738378 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:53:36.041825  738378 out.go:252]   - Generating certificates and keys ...
	I0111 08:53:36.041991  738378 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:53:36.042097  738378 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:53:36.195964  738378 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 08:53:36.280712  738378 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 08:53:36.700010  738378 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 08:53:37.247151  738378 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 08:53:37.556081  738378 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 08:53:37.556386  738378 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-472282 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 08:53:38.191085  738378 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 08:53:38.191463  738378 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-472282 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 08:53:38.261140  738378 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 08:53:38.351908  738378 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 08:53:38.869016  738378 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 08:53:38.869390  738378 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:53:39.111009  738378 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:53:39.304389  738378 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:53:39.394150  738378 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:53:39.519456  738378 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:53:39.734510  738378 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:53:39.736679  738378 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:53:39.746344  738378 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:53:39.749651  738378 out.go:252]   - Booting up control plane ...
	I0111 08:53:39.749762  738378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:53:39.749843  738378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:53:39.749909  738378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:53:39.783364  738378 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:53:39.783474  738378 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:53:39.798371  738378 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:53:39.798473  738378 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:53:39.801003  738378 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:53:39.993711  738378 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:53:39.993836  738378 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:57:39.995008  738378 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001033617s
	I0111 08:57:39.995043  738378 kubeadm.go:319] 
	I0111 08:57:39.995150  738378 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:57:39.995209  738378 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:57:39.995608  738378 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:57:39.995622  738378 kubeadm.go:319] 
	I0111 08:57:39.995951  738378 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:57:39.996010  738378 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:57:39.996064  738378 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:57:39.996070  738378 kubeadm.go:319] 
	I0111 08:57:40.001006  738378 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:57:40.001456  738378 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:57:40.001570  738378 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:57:40.001803  738378 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:57:40.001808  738378 kubeadm.go:319] 
	W0111 08:57:40.002016  738378 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-472282 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-472282 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001033617s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-472282 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-472282 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001033617s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I0111 08:57:40.002108  738378 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0111 08:57:40.002449  738378 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 08:57:40.432482  738378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:57:40.445442  738378 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:57:40.445509  738378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:57:40.453550  738378 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:57:40.453574  738378 kubeadm.go:158] found existing configuration files:
	
	I0111 08:57:40.453673  738378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:57:40.461795  738378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:57:40.461861  738378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:57:40.472482  738378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:57:40.480747  738378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:57:40.480813  738378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:57:40.488311  738378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:57:40.496213  738378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:57:40.496302  738378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:57:40.503545  738378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:57:40.511343  738378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:57:40.511407  738378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:57:40.520870  738378 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:57:40.561707  738378 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:57:40.561767  738378 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:57:40.640168  738378 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:57:40.640246  738378 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:57:40.640288  738378 kubeadm.go:319] OS: Linux
	I0111 08:57:40.640338  738378 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:57:40.640391  738378 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:57:40.640441  738378 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:57:40.640493  738378 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:57:40.640544  738378 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:57:40.640596  738378 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:57:40.640644  738378 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:57:40.640698  738378 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:57:40.640746  738378 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:57:40.711442  738378 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:57:40.711567  738378 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:57:40.711666  738378 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:57:40.718809  738378 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:57:40.724301  738378 out.go:252]   - Generating certificates and keys ...
	I0111 08:57:40.724446  738378 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:57:40.724544  738378 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:57:40.724673  738378 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0111 08:57:40.724760  738378 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0111 08:57:40.724855  738378 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0111 08:57:40.724991  738378 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0111 08:57:40.725122  738378 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0111 08:57:40.725216  738378 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0111 08:57:40.725353  738378 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0111 08:57:40.725473  738378 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0111 08:57:40.725544  738378 kubeadm.go:319] [certs] Using the existing "sa" key
	I0111 08:57:40.725634  738378 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:57:40.882476  738378 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:57:41.239847  738378 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:57:41.469426  738378 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:57:41.603883  738378 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:57:41.914172  738378 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:57:41.914981  738378 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:57:41.917658  738378 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:57:41.921023  738378 out.go:252]   - Booting up control plane ...
	I0111 08:57:41.921144  738378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:57:41.921234  738378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:57:41.921312  738378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:57:41.936452  738378 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:57:41.936564  738378 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:57:41.946094  738378 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:57:41.946599  738378 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:57:41.946942  738378 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:57:42.086658  738378 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:57:42.086781  738378 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 09:01:42.086216  738378 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001316385s
	I0111 09:01:42.086247  738378 kubeadm.go:319] 
	I0111 09:01:42.086306  738378 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 09:01:42.086340  738378 kubeadm.go:319] 	- The kubelet is not running
	I0111 09:01:42.086448  738378 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 09:01:42.086452  738378 kubeadm.go:319] 
	I0111 09:01:42.086568  738378 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 09:01:42.086602  738378 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 09:01:42.086633  738378 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 09:01:42.086637  738378 kubeadm.go:319] 
	I0111 09:01:42.090433  738378 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:01:42.090849  738378 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 09:01:42.090960  738378 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 09:01:42.091187  738378 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 09:01:42.091201  738378 kubeadm.go:319] 
	I0111 09:01:42.091267  738378 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 09:01:42.091329  738378 kubeadm.go:403] duration metric: took 8m6.489260005s to StartCluster
	I0111 09:01:42.091391  738378 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0111 09:01:42.091469  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0111 09:01:42.122461  738378 cri.go:96] found id: ""
	I0111 09:01:42.122557  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.122568  738378 logs.go:284] No container was found matching "kube-apiserver"
	I0111 09:01:42.122577  738378 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0111 09:01:42.122661  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0111 09:01:42.155907  738378 cri.go:96] found id: ""
	I0111 09:01:42.155938  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.155948  738378 logs.go:284] No container was found matching "etcd"
	I0111 09:01:42.155956  738378 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0111 09:01:42.156021  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0111 09:01:42.188652  738378 cri.go:96] found id: ""
	I0111 09:01:42.188676  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.188687  738378 logs.go:284] No container was found matching "coredns"
	I0111 09:01:42.188694  738378 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0111 09:01:42.188760  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0111 09:01:42.219107  738378 cri.go:96] found id: ""
	I0111 09:01:42.219132  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.219142  738378 logs.go:284] No container was found matching "kube-scheduler"
	I0111 09:01:42.219148  738378 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0111 09:01:42.219220  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0111 09:01:42.248491  738378 cri.go:96] found id: ""
	I0111 09:01:42.248515  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.248524  738378 logs.go:284] No container was found matching "kube-proxy"
	I0111 09:01:42.248531  738378 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0111 09:01:42.248599  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0111 09:01:42.276176  738378 cri.go:96] found id: ""
	I0111 09:01:42.276204  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.276214  738378 logs.go:284] No container was found matching "kube-controller-manager"
	I0111 09:01:42.276221  738378 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0111 09:01:42.276305  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0111 09:01:42.305146  738378 cri.go:96] found id: ""
	I0111 09:01:42.305179  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.305191  738378 logs.go:284] No container was found matching "kindnet"
	I0111 09:01:42.305203  738378 logs.go:123] Gathering logs for kubelet ...
	I0111 09:01:42.305217  738378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0111 09:01:42.373210  738378 logs.go:123] Gathering logs for dmesg ...
	I0111 09:01:42.373251  738378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0111 09:01:42.391875  738378 logs.go:123] Gathering logs for describe nodes ...
	I0111 09:01:42.391906  738378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0111 09:01:42.519857  738378 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 09:01:42.510570    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.511451    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.513051    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.513367    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.516015    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0111 09:01:42.510570    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.511451    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.513051    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.513367    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.516015    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0111 09:01:42.519883  738378 logs.go:123] Gathering logs for CRI-O ...
	I0111 09:01:42.519896  738378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0111 09:01:42.555199  738378 logs.go:123] Gathering logs for container status ...
	I0111 09:01:42.555237  738378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0111 09:01:42.587835  738378 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001316385s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0111 09:01:42.587946  738378 out.go:285] * 
	* 
	W0111 09:01:42.588182  738378 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001316385s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001316385s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 09:01:42.588197  738378 out.go:285] * 
	* 
	W0111 09:01:42.588452  738378 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 09:01:42.594611  738378 out.go:203] 
	W0111 09:01:42.597448  738378 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001316385s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 09:01:42.597510  738378 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0111 09:01:42.597536  738378 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0111 09:01:42.600575  738378 out.go:203] 

                                                
                                                
** /stderr **
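The suggestion embedded in the stderr above points at a kubelet cgroup-driver mismatch on this cgroups-v1 host. A possible manual retry, following the log's own hint (the profile name and sizing simply reuse the values from this run, and the outcome is not verified here), would be:

	out/minikube-linux-arm64 delete -p force-systemd-env-472282
	out/minikube-linux-arm64 start -p force-systemd-env-472282 --memory=3072 --driver=docker --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

The preflight warning about FailCgroupV1 also indicates that, for kubelet v1.35 on this kernel, cgroups v1 support may additionally need FailCgroupV1 set to false in the kubelet configuration.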
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-472282 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2026-01-11 09:01:42.663173286 +0000 UTC m=+2887.449003498
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-472282
helpers_test.go:244: (dbg) docker inspect force-systemd-env-472282:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13904923ccb3d0f050068cc8b6afd9f7055f386736daa65710f1111cba22ede1",
	        "Created": "2026-01-11T08:53:25.520451264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 739008,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T08:53:25.600978157Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/13904923ccb3d0f050068cc8b6afd9f7055f386736daa65710f1111cba22ede1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13904923ccb3d0f050068cc8b6afd9f7055f386736daa65710f1111cba22ede1/hostname",
	        "HostsPath": "/var/lib/docker/containers/13904923ccb3d0f050068cc8b6afd9f7055f386736daa65710f1111cba22ede1/hosts",
	        "LogPath": "/var/lib/docker/containers/13904923ccb3d0f050068cc8b6afd9f7055f386736daa65710f1111cba22ede1/13904923ccb3d0f050068cc8b6afd9f7055f386736daa65710f1111cba22ede1-json.log",
	        "Name": "/force-systemd-env-472282",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-472282:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-472282",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13904923ccb3d0f050068cc8b6afd9f7055f386736daa65710f1111cba22ede1",
	                "LowerDir": "/var/lib/docker/overlay2/2658b22594049053f3844c18468f74107f5be1b78994cae7a1cc7480e56dad10-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2658b22594049053f3844c18468f74107f5be1b78994cae7a1cc7480e56dad10/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2658b22594049053f3844c18468f74107f5be1b78994cae7a1cc7480e56dad10/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2658b22594049053f3844c18468f74107f5be1b78994cae7a1cc7480e56dad10/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-472282",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-472282/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-472282",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-472282",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-472282",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d2f365da9b38b8103b7be33081481a5957fbfe74bfd2a751fdd7b6819df2102",
	            "SandboxKey": "/var/run/docker/netns/2d2f365da9b3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33743"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33744"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33747"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33745"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33746"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-472282": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:96:77:3b:f8:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77b43b72205f738f9651b8aa5401584e2d46877fe21465e10bb0ad3780a8a7bf",
	                    "EndpointID": "9006c400f3751fb618455752c16d8d32aaec4b31695a611d0b81365e20a275bd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-472282",
	                        "13904923ccb3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
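The inspect output above confirms the container is still running with SSH published on 127.0.0.1:33743. For a quick manual check, the same Go template the harness uses elsewhere in this log can pull that port directly (the container name here is just the profile name from this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-env-472282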
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-472282 -n force-systemd-env-472282
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-472282 -n force-systemd-env-472282: exit status 6 (325.586103ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 09:01:43.003595  761370 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-472282" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig

                                                
                                                
** /stderr **
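The status error is a follow-on symptom rather than a separate failure: the cluster never registered an endpoint in the kubeconfig, so the stale-context warning in the stdout above is expected. The fix it suggests would normally be run against this profile, though with the control plane never having come up there is no endpoint for it to record:

	out/minikube-linux-arm64 update-context -p force-systemd-env-472282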
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-472282 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-293572 sudo cat /etc/kubernetes/kubelet.conf                                                                      │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl cat docker --no-pager                                                                       │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /etc/docker/daemon.json                                                                           │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo docker system info                                                                                    │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cri-dockerd --version                                                                                 │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl cat containerd --no-pager                                                                   │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /etc/containerd/config.toml                                                                       │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo containerd config dump                                                                                │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl cat crio --no-pager                                                                         │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo crio config                                                                                           │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ delete  │ -p cilium-293572                                                                                                            │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │ 11 Jan 26 08:55 UTC │
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │ 11 Jan 26 08:56 UTC │
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                   │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ delete  │ -p cert-expiration-448134                                                                                                   │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ start   │ -p force-systemd-flag-630015 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-630015 │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 08:59:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 08:59:42.727417  757749 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:59:42.727561  757749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:59:42.727572  757749 out.go:374] Setting ErrFile to fd 2...
	I0111 08:59:42.727586  757749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:59:42.728228  757749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:59:42.728668  757749 out.go:368] Setting JSON to false
	I0111 08:59:42.729495  757749 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13333,"bootTime":1768108650,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 08:59:42.729566  757749 start.go:143] virtualization:  
	I0111 08:59:42.733115  757749 out.go:179] * [force-systemd-flag-630015] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:59:42.737693  757749 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:59:42.737848  757749 notify.go:221] Checking for updates...
	I0111 08:59:42.744442  757749 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:59:42.747693  757749 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:59:42.750736  757749 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 08:59:42.753823  757749 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:59:42.756890  757749 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:59:42.760416  757749 config.go:182] Loaded profile config "force-systemd-env-472282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:59:42.760588  757749 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:59:42.790965  757749 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:59:42.791085  757749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:59:42.861581  757749 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:59:42.852220532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:59:42.861689  757749 docker.go:319] overlay module found
	I0111 08:59:42.864936  757749 out.go:179] * Using the docker driver based on user configuration
	I0111 08:59:42.867895  757749 start.go:309] selected driver: docker
	I0111 08:59:42.867917  757749 start.go:928] validating driver "docker" against <nil>
	I0111 08:59:42.867931  757749 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:59:42.868689  757749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:59:42.919077  757749 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:59:42.910323157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:59:42.919231  757749 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:59:42.919447  757749 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 08:59:42.922401  757749 out.go:179] * Using Docker driver with root privileges
	I0111 08:59:42.925202  757749 cni.go:84] Creating CNI manager for ""
	I0111 08:59:42.925268  757749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:59:42.925281  757749 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 08:59:42.925365  757749 start.go:353] cluster config:
	{Name:force-systemd-flag-630015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-630015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:59:42.928567  757749 out.go:179] * Starting "force-systemd-flag-630015" primary control-plane node in "force-systemd-flag-630015" cluster
	I0111 08:59:42.931559  757749 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 08:59:42.934559  757749 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:59:42.937344  757749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:59:42.937397  757749 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 08:59:42.937410  757749 cache.go:65] Caching tarball of preloaded images
	I0111 08:59:42.937419  757749 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:59:42.937493  757749 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 08:59:42.937502  757749 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 08:59:42.937610  757749 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/config.json ...
	I0111 08:59:42.937627  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/config.json: {Name:mk0f6d2032b48bd70b430b3196c0a86321d46383 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:42.957103  757749 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:59:42.957122  757749 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:59:42.957143  757749 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:59:42.957177  757749 start.go:360] acquireMachinesLock for force-systemd-flag-630015: {Name:mk67b8ec2d0abace4db1e232ffdec873308880be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:59:42.957297  757749 start.go:364] duration metric: took 103.657µs to acquireMachinesLock for "force-systemd-flag-630015"
	I0111 08:59:42.957334  757749 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-630015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-630015 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 08:59:42.957396  757749 start.go:125] createHost starting for "" (driver="docker")
	I0111 08:59:42.962712  757749 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 08:59:42.962962  757749 start.go:159] libmachine.API.Create for "force-systemd-flag-630015" (driver="docker")
	I0111 08:59:42.963000  757749 client.go:173] LocalClient.Create starting
	I0111 08:59:42.963087  757749 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem
	I0111 08:59:42.963129  757749 main.go:144] libmachine: Decoding PEM data...
	I0111 08:59:42.963148  757749 main.go:144] libmachine: Parsing certificate...
	I0111 08:59:42.963203  757749 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem
	I0111 08:59:42.963227  757749 main.go:144] libmachine: Decoding PEM data...
	I0111 08:59:42.963242  757749 main.go:144] libmachine: Parsing certificate...
	I0111 08:59:42.963605  757749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-630015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 08:59:42.982107  757749 cli_runner.go:211] docker network inspect force-systemd-flag-630015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 08:59:42.982245  757749 network_create.go:284] running [docker network inspect force-systemd-flag-630015] to gather additional debugging logs...
	I0111 08:59:42.982269  757749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-630015
	W0111 08:59:42.998168  757749 cli_runner.go:211] docker network inspect force-systemd-flag-630015 returned with exit code 1
	I0111 08:59:42.998198  757749 network_create.go:287] error running [docker network inspect force-systemd-flag-630015]: docker network inspect force-systemd-flag-630015: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-630015 not found
	I0111 08:59:42.998210  757749 network_create.go:289] output of [docker network inspect force-systemd-flag-630015]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-630015 not found
	
	** /stderr **
	I0111 08:59:42.998312  757749 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:59:43.016102  757749 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-113e3e286bbe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:2e:86:95:08:19} reservation:<nil>}
	I0111 08:59:43.016386  757749 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461c1a9d970d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:7e:25:fe:d0:0d} reservation:<nil>}
	I0111 08:59:43.016676  757749 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a38e10816f85 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:42:af:ae:32:ae} reservation:<nil>}
	I0111 08:59:43.017092  757749 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a74e0}
	I0111 08:59:43.017113  757749 network_create.go:124] attempt to create docker network force-systemd-flag-630015 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0111 08:59:43.017177  757749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-630015 force-systemd-flag-630015
	I0111 08:59:43.083963  757749 network_create.go:108] docker network force-systemd-flag-630015 192.168.76.0/24 created
	I0111 08:59:43.084000  757749 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-630015" container
	I0111 08:59:43.084074  757749 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 08:59:43.099784  757749 cli_runner.go:164] Run: docker volume create force-systemd-flag-630015 --label name.minikube.sigs.k8s.io=force-systemd-flag-630015 --label created_by.minikube.sigs.k8s.io=true
	I0111 08:59:43.117375  757749 oci.go:103] Successfully created a docker volume force-systemd-flag-630015
	I0111 08:59:43.117474  757749 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-630015-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-630015 --entrypoint /usr/bin/test -v force-systemd-flag-630015:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 08:59:43.653740  757749 oci.go:107] Successfully prepared a docker volume force-systemd-flag-630015
	I0111 08:59:43.653820  757749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:59:43.653835  757749 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 08:59:43.653909  757749 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-630015:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 08:59:47.671172  757749 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-630015:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.01722166s)
	I0111 08:59:47.671207  757749 kic.go:203] duration metric: took 4.017368213s to extract preloaded images to volume ...
	W0111 08:59:47.671363  757749 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 08:59:47.671476  757749 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 08:59:47.759154  757749 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-630015 --name force-systemd-flag-630015 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-630015 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-630015 --network force-systemd-flag-630015 --ip 192.168.76.2 --volume force-systemd-flag-630015:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 08:59:48.042014  757749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-630015 --format={{.State.Running}}
	I0111 08:59:48.064767  757749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-630015 --format={{.State.Status}}
	I0111 08:59:48.086545  757749 cli_runner.go:164] Run: docker exec force-systemd-flag-630015 stat /var/lib/dpkg/alternatives/iptables
	I0111 08:59:48.140325  757749 oci.go:144] the created container "force-systemd-flag-630015" has a running status.
	I0111 08:59:48.140359  757749 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa...
	I0111 08:59:48.520390  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0111 08:59:48.520492  757749 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 08:59:48.542198  757749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-630015 --format={{.State.Status}}
	I0111 08:59:48.565991  757749 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 08:59:48.566011  757749 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-630015 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 08:59:48.612330  757749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-630015 --format={{.State.Status}}
	I0111 08:59:48.628891  757749 machine.go:94] provisionDockerMachine start ...
	I0111 08:59:48.628993  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:48.645700  757749 main.go:144] libmachine: Using SSH client type: native
	I0111 08:59:48.646041  757749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33773 <nil> <nil>}
	I0111 08:59:48.646051  757749 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:59:48.646642  757749 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42388->127.0.0.1:33773: read: connection reset by peer
	I0111 08:59:51.793849  757749 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-630015
	
	I0111 08:59:51.793877  757749 ubuntu.go:182] provisioning hostname "force-systemd-flag-630015"
	I0111 08:59:51.793953  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:51.811563  757749 main.go:144] libmachine: Using SSH client type: native
	I0111 08:59:51.811887  757749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33773 <nil> <nil>}
	I0111 08:59:51.811906  757749 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-630015 && echo "force-systemd-flag-630015" | sudo tee /etc/hostname
	I0111 08:59:51.971825  757749 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-630015
	
	I0111 08:59:51.971905  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:51.989588  757749 main.go:144] libmachine: Using SSH client type: native
	I0111 08:59:51.989887  757749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33773 <nil> <nil>}
	I0111 08:59:51.989903  757749 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-630015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-630015/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-630015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:59:52.138548  757749 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 08:59:52.138622  757749 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 08:59:52.138668  757749 ubuntu.go:190] setting up certificates
	I0111 08:59:52.138706  757749 provision.go:84] configureAuth start
	I0111 08:59:52.138857  757749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-630015
	I0111 08:59:52.156291  757749 provision.go:143] copyHostCerts
	I0111 08:59:52.156331  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 08:59:52.156362  757749 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 08:59:52.156369  757749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 08:59:52.156446  757749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 08:59:52.156553  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 08:59:52.156571  757749 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 08:59:52.156575  757749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 08:59:52.156601  757749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 08:59:52.156647  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 08:59:52.156663  757749 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 08:59:52.156667  757749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 08:59:52.156690  757749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 08:59:52.156742  757749 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-630015 san=[127.0.0.1 192.168.76.2 force-systemd-flag-630015 localhost minikube]
	I0111 08:59:52.313813  757749 provision.go:177] copyRemoteCerts
	I0111 08:59:52.313905  757749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:59:52.313953  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:52.331908  757749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa Username:docker}
	I0111 08:59:52.433923  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0111 08:59:52.433991  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0111 08:59:52.451886  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0111 08:59:52.451956  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 08:59:52.469576  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0111 08:59:52.469641  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 08:59:52.487904  757749 provision.go:87] duration metric: took 349.153797ms to configureAuth
	I0111 08:59:52.487977  757749 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:59:52.488194  757749 config.go:182] Loaded profile config "force-systemd-flag-630015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:59:52.488340  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:52.505713  757749 main.go:144] libmachine: Using SSH client type: native
	I0111 08:59:52.506048  757749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33773 <nil> <nil>}
	I0111 08:59:52.506068  757749 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 08:59:52.814443  757749 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 08:59:52.814474  757749 machine.go:97] duration metric: took 4.185563775s to provisionDockerMachine
	I0111 08:59:52.814490  757749 client.go:176] duration metric: took 9.851475453s to LocalClient.Create
	I0111 08:59:52.814505  757749 start.go:167] duration metric: took 9.851544237s to libmachine.API.Create "force-systemd-flag-630015"
	I0111 08:59:52.814526  757749 start.go:293] postStartSetup for "force-systemd-flag-630015" (driver="docker")
	I0111 08:59:52.814541  757749 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:59:52.814618  757749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:59:52.814665  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:52.832489  757749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa Username:docker}
	I0111 08:59:52.939674  757749 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:59:52.943756  757749 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:59:52.943791  757749 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:59:52.943803  757749 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 08:59:52.943856  757749 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 08:59:52.943945  757749 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 08:59:52.943958  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> /etc/ssl/certs/5769072.pem
	I0111 08:59:52.944054  757749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:59:52.952265  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 08:59:52.972869  757749 start.go:296] duration metric: took 158.324009ms for postStartSetup
	I0111 08:59:52.973257  757749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-630015
	I0111 08:59:52.992259  757749 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/config.json ...
	I0111 08:59:52.992555  757749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:59:52.992608  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:53.012475  757749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa Username:docker}
	I0111 08:59:53.115039  757749 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:59:53.119651  757749 start.go:128] duration metric: took 10.162240712s to createHost
	I0111 08:59:53.119684  757749 start.go:83] releasing machines lock for "force-systemd-flag-630015", held for 10.162376755s
	I0111 08:59:53.119758  757749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-630015
	I0111 08:59:53.136913  757749 ssh_runner.go:195] Run: cat /version.json
	I0111 08:59:53.136928  757749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:59:53.136967  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:53.136987  757749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-630015
	I0111 08:59:53.158664  757749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa Username:docker}
	I0111 08:59:53.167698  757749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/force-systemd-flag-630015/id_rsa Username:docker}
	I0111 08:59:53.357632  757749 ssh_runner.go:195] Run: systemctl --version
	I0111 08:59:53.364331  757749 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 08:59:53.399763  757749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:59:53.404278  757749 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:59:53.404352  757749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:59:53.432690  757749 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 08:59:53.432764  757749 start.go:496] detecting cgroup driver to use...
	I0111 08:59:53.432793  757749 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0111 08:59:53.432900  757749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 08:59:53.450764  757749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:59:53.464083  757749 docker.go:218] disabling cri-docker service (if available) ...
	I0111 08:59:53.464169  757749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 08:59:53.482589  757749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 08:59:53.502225  757749 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 08:59:53.632454  757749 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 08:59:53.783752  757749 docker.go:234] disabling docker service ...
	I0111 08:59:53.783831  757749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 08:59:53.803904  757749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 08:59:53.816845  757749 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 08:59:53.949800  757749 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 08:59:54.075042  757749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 08:59:54.088917  757749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:59:54.102807  757749 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 08:59:54.102916  757749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.111777  757749 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0111 08:59:54.111850  757749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.121055  757749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.130181  757749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.139689  757749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:59:54.147555  757749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.156607  757749 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.170180  757749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:59:54.179370  757749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:59:54.186830  757749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:59:54.194205  757749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:59:54.320996  757749 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 08:59:54.501839  757749 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 08:59:54.501932  757749 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 08:59:54.505960  757749 start.go:574] Will wait 60s for crictl version
	I0111 08:59:54.506076  757749 ssh_runner.go:195] Run: which crictl
	I0111 08:59:54.509495  757749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:59:54.534254  757749 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 08:59:54.534373  757749 ssh_runner.go:195] Run: crio --version
	I0111 08:59:54.561533  757749 ssh_runner.go:195] Run: crio --version
	I0111 08:59:54.596514  757749 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 08:59:54.599399  757749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-630015 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:59:54.615416  757749 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 08:59:54.619390  757749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:59:54.628960  757749 kubeadm.go:884] updating cluster {Name:force-systemd-flag-630015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-630015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:59:54.629077  757749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:59:54.629127  757749 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:59:54.671757  757749 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:59:54.671787  757749 crio.go:433] Images already preloaded, skipping extraction
	I0111 08:59:54.671847  757749 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:59:54.702634  757749 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:59:54.702658  757749 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:59:54.702667  757749 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0111 08:59:54.702759  757749 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-630015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-630015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:59:54.702850  757749 ssh_runner.go:195] Run: crio config
	I0111 08:59:54.756410  757749 cni.go:84] Creating CNI manager for ""
	I0111 08:59:54.756434  757749 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:59:54.756487  757749 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:59:54.756520  757749 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-630015 NodeName:force-systemd-flag-630015 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:59:54.756664  757749 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-630015"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 08:59:54.756743  757749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:59:54.764408  757749 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:59:54.764507  757749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:59:54.772131  757749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0111 08:59:54.784827  757749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:59:54.798337  757749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0111 08:59:54.811438  757749 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:59:54.815095  757749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:59:54.825510  757749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:59:54.952595  757749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:59:54.969787  757749 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015 for IP: 192.168.76.2
	I0111 08:59:54.969851  757749 certs.go:195] generating shared ca certs ...
	I0111 08:59:54.969884  757749 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:54.970078  757749 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 08:59:54.970180  757749 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 08:59:54.970209  757749 certs.go:257] generating profile certs ...
	I0111 08:59:54.970299  757749 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/client.key
	I0111 08:59:54.970341  757749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/client.crt with IP's: []
	I0111 08:59:55.111056  757749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/client.crt ...
	I0111 08:59:55.111094  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/client.crt: {Name:mk3447f8010fea84488c5d961de16a6017788675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:55.111305  757749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/client.key ...
	I0111 08:59:55.111321  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/client.key: {Name:mk3810c0261b479f915815c69b7bbb1973a449e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:55.111424  757749 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key.54eed94c
	I0111 08:59:55.111445  757749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt.54eed94c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0111 08:59:55.391700  757749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt.54eed94c ...
	I0111 08:59:55.391734  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt.54eed94c: {Name:mk0dfd65c00056ee70dc240b7a6870a7253530f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:55.391927  757749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key.54eed94c ...
	I0111 08:59:55.391941  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key.54eed94c: {Name:mk6dbd290a2c614096c20a27dabbd886954df729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:55.392035  757749 certs.go:382] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt.54eed94c -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt
	I0111 08:59:55.392111  757749 certs.go:386] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key.54eed94c -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key
	I0111 08:59:55.392172  757749 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.key
	I0111 08:59:55.392193  757749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.crt with IP's: []
	I0111 08:59:55.638377  757749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.crt ...
	I0111 08:59:55.638409  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.crt: {Name:mk44fcca6096a57843d8bf5df407d624f081de1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:55.638601  757749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.key ...
	I0111 08:59:55.638616  757749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.key: {Name:mkacef72efa4354d2cd0d689112bb93f5a595040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:59:55.638704  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0111 08:59:55.638726  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0111 08:59:55.638742  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0111 08:59:55.638762  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0111 08:59:55.638773  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0111 08:59:55.638792  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0111 08:59:55.638808  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0111 08:59:55.638819  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0111 08:59:55.638878  757749 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 08:59:55.638921  757749 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 08:59:55.638934  757749 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 08:59:55.638960  757749 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 08:59:55.638988  757749 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:59:55.639016  757749 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 08:59:55.639064  757749 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 08:59:55.639099  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:59:55.639116  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem -> /usr/share/ca-certificates/576907.pem
	I0111 08:59:55.639127  757749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> /usr/share/ca-certificates/5769072.pem
	I0111 08:59:55.639717  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:59:55.658896  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 08:59:55.676895  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:59:55.701485  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:59:55.726251  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0111 08:59:55.748578  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:59:55.767393  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:59:55.784670  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/force-systemd-flag-630015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 08:59:55.802690  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:59:55.820342  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 08:59:55.838708  757749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 08:59:55.856927  757749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:59:55.869767  757749 ssh_runner.go:195] Run: openssl version
	I0111 08:59:55.876569  757749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 08:59:55.884166  757749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 08:59:55.891629  757749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 08:59:55.895300  757749 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 08:59:55.895368  757749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 08:59:55.937500  757749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:59:55.945135  757749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/576907.pem /etc/ssl/certs/51391683.0
	I0111 08:59:55.952879  757749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 08:59:55.960325  757749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 08:59:55.968063  757749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 08:59:55.972281  757749 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 08:59:55.972345  757749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 08:59:56.016108  757749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:59:56.024171  757749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5769072.pem /etc/ssl/certs/3ec20f2e.0
	I0111 08:59:56.032118  757749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:59:56.040182  757749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:59:56.048247  757749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:59:56.052181  757749 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:59:56.052250  757749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:59:56.093684  757749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:59:56.101837  757749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 08:59:56.109693  757749 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:59:56.113409  757749 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 08:59:56.113466  757749 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-630015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-630015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:59:56.113553  757749 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:59:56.113617  757749 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:59:56.141097  757749 cri.go:96] found id: ""
	I0111 08:59:56.141175  757749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:59:56.149374  757749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:59:56.157193  757749 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:59:56.157293  757749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:59:56.166142  757749 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:59:56.166168  757749 kubeadm.go:158] found existing configuration files:
	
	I0111 08:59:56.166229  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:59:56.175404  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:59:56.175522  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:59:56.183174  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:59:56.190971  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:59:56.191088  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:59:56.198545  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:59:56.206394  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:59:56.206470  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:59:56.213915  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:59:56.221725  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:59:56.221795  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:59:56.229122  757749 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:59:56.266737  757749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:59:56.266800  757749 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:59:56.342554  757749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:59:56.342634  757749 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:59:56.342674  757749 kubeadm.go:319] OS: Linux
	I0111 08:59:56.342724  757749 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:59:56.342777  757749 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:59:56.342828  757749 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:59:56.342880  757749 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:59:56.342931  757749 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:59:56.342984  757749 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:59:56.343033  757749 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:59:56.343092  757749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:59:56.343141  757749 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:59:56.410410  757749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:59:56.410535  757749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:59:56.410633  757749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:59:56.418629  757749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:59:56.425154  757749 out.go:252]   - Generating certificates and keys ...
	I0111 08:59:56.425245  757749 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:59:56.425317  757749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:59:57.057632  757749 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 08:59:57.566376  757749 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 08:59:57.802598  757749 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 08:59:57.922207  757749 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 08:59:57.989463  757749 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 08:59:57.989777  757749 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-630015 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 08:59:58.119139  757749 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 08:59:58.119532  757749 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-630015 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 08:59:58.190162  757749 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 08:59:58.411963  757749 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 08:59:58.636874  757749 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 08:59:58.637165  757749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:59:58.856965  757749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:59:59.269048  757749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:59:59.579868  757749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:59:59.746731  757749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:59:59.947813  757749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:59:59.948493  757749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:59:59.951202  757749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:59:59.954908  757749 out.go:252]   - Booting up control plane ...
	I0111 08:59:59.955011  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:59:59.955089  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:59:59.955158  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:59:59.970035  757749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:59:59.970252  757749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:59:59.979482  757749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:59:59.979596  757749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:59:59.979655  757749 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 09:00:00.628099  757749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 09:00:00.628227  757749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 09:01:42.086216  738378 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001316385s
	I0111 09:01:42.086247  738378 kubeadm.go:319] 
	I0111 09:01:42.086306  738378 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 09:01:42.086340  738378 kubeadm.go:319] 	- The kubelet is not running
	I0111 09:01:42.086448  738378 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 09:01:42.086452  738378 kubeadm.go:319] 
	I0111 09:01:42.086568  738378 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 09:01:42.086602  738378 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 09:01:42.086633  738378 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 09:01:42.086637  738378 kubeadm.go:319] 
	I0111 09:01:42.090433  738378 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:01:42.090849  738378 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 09:01:42.090960  738378 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 09:01:42.091187  738378 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 09:01:42.091201  738378 kubeadm.go:319] 
	I0111 09:01:42.091267  738378 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 09:01:42.091329  738378 kubeadm.go:403] duration metric: took 8m6.489260005s to StartCluster
	I0111 09:01:42.091391  738378 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0111 09:01:42.091469  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0111 09:01:42.122461  738378 cri.go:96] found id: ""
	I0111 09:01:42.122557  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.122568  738378 logs.go:284] No container was found matching "kube-apiserver"
	I0111 09:01:42.122577  738378 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0111 09:01:42.122661  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0111 09:01:42.155907  738378 cri.go:96] found id: ""
	I0111 09:01:42.155938  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.155948  738378 logs.go:284] No container was found matching "etcd"
	I0111 09:01:42.155956  738378 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0111 09:01:42.156021  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0111 09:01:42.188652  738378 cri.go:96] found id: ""
	I0111 09:01:42.188676  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.188687  738378 logs.go:284] No container was found matching "coredns"
	I0111 09:01:42.188694  738378 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0111 09:01:42.188760  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0111 09:01:42.219107  738378 cri.go:96] found id: ""
	I0111 09:01:42.219132  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.219142  738378 logs.go:284] No container was found matching "kube-scheduler"
	I0111 09:01:42.219148  738378 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0111 09:01:42.219220  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0111 09:01:42.248491  738378 cri.go:96] found id: ""
	I0111 09:01:42.248515  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.248524  738378 logs.go:284] No container was found matching "kube-proxy"
	I0111 09:01:42.248531  738378 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0111 09:01:42.248599  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0111 09:01:42.276176  738378 cri.go:96] found id: ""
	I0111 09:01:42.276204  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.276214  738378 logs.go:284] No container was found matching "kube-controller-manager"
	I0111 09:01:42.276221  738378 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0111 09:01:42.276305  738378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0111 09:01:42.305146  738378 cri.go:96] found id: ""
	I0111 09:01:42.305179  738378 logs.go:282] 0 containers: []
	W0111 09:01:42.305191  738378 logs.go:284] No container was found matching "kindnet"
	I0111 09:01:42.305203  738378 logs.go:123] Gathering logs for kubelet ...
	I0111 09:01:42.305217  738378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0111 09:01:42.373210  738378 logs.go:123] Gathering logs for dmesg ...
	I0111 09:01:42.373251  738378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0111 09:01:42.391875  738378 logs.go:123] Gathering logs for describe nodes ...
	I0111 09:01:42.391906  738378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0111 09:01:42.519857  738378 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 09:01:42.510570    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.511451    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.513051    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.513367    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.516015    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0111 09:01:42.510570    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.511451    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.513051    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.513367    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:42.516015    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0111 09:01:42.519883  738378 logs.go:123] Gathering logs for CRI-O ...
	I0111 09:01:42.519896  738378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0111 09:01:42.555199  738378 logs.go:123] Gathering logs for container status ...
	I0111 09:01:42.555237  738378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0111 09:01:42.587835  738378 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001316385s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0111 09:01:42.587946  738378 out.go:285] * 
	W0111 09:01:42.588182  738378 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001316385s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 09:01:42.588197  738378 out.go:285] * 
	W0111 09:01:42.588452  738378 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 09:01:42.594611  738378 out.go:203] 
	W0111 09:01:42.597448  738378 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001316385s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 09:01:42.597510  738378 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0111 09:01:42.597536  738378 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0111 09:01:42.600575  738378 out.go:203] 
	
	
	==> CRI-O <==
	Jan 11 08:53:33 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:33.027420913Z" level=info msg="Registered SIGHUP reload watcher"
	Jan 11 08:53:33 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:33.027458017Z" level=info msg="Starting seccomp notifier watcher"
	Jan 11 08:53:33 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:33.027497657Z" level=info msg="Create NRI interface"
	Jan 11 08:53:33 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:33.027590048Z" level=info msg="built-in NRI default validator is disabled"
	Jan 11 08:53:33 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:33.027599919Z" level=info msg="runtime interface created"
	Jan 11 08:53:33 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:33.027610824Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Jan 11 08:53:33 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:33.027617224Z" level=info msg="runtime interface starting up..."
	Jan 11 08:53:33 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:33.027622885Z" level=info msg="starting plugins..."
	Jan 11 08:53:33 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:33.027635587Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 11 08:53:33 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:33.027704872Z" level=info msg="No systemd watchdog enabled"
	Jan 11 08:53:33 force-systemd-env-472282 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Jan 11 08:53:36 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:36.028790353Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=2c8778d2-1298-4f1c-b2ea-ece636aa7b2c name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:53:36 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:36.030175692Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=d1f61386-f745-47a7-8288-aff633c79d8e name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:53:36 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:36.030920802Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=b0c3581d-0e54-4c12-a5a7-bd59d049369c name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:53:36 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:36.032735423Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=f70ccabb-7bd0-4b76-9eb3-3cdf9629b5a3 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:53:36 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:36.033439556Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=59bb836f-2afa-4d44-a430-944a1490b51f name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:53:36 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:36.034208944Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=0dcff17e-d2b4-40f1-b119-25ac79af1462 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:53:36 force-systemd-env-472282 crio[842]: time="2026-01-11T08:53:36.034888166Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=4165aa44-262f-4c9c-b2e8-b61690af58b5 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:57:40 force-systemd-env-472282 crio[842]: time="2026-01-11T08:57:40.714595068Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=77d583f0-3f92-4647-b2aa-f843e911300c name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:57:40 force-systemd-env-472282 crio[842]: time="2026-01-11T08:57:40.71542209Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=62c15383-e250-4b14-9ab3-ec7fa71c3115 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:57:40 force-systemd-env-472282 crio[842]: time="2026-01-11T08:57:40.715948694Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=296f7e01-ff8c-446e-be71-fec865b24780 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:57:40 force-systemd-env-472282 crio[842]: time="2026-01-11T08:57:40.716401451Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=20c2dffe-4955-4404-9eee-773c2eaf3e6d name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:57:40 force-systemd-env-472282 crio[842]: time="2026-01-11T08:57:40.71685328Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=6d7a40da-1c94-4d04-a99a-48a28fcf04fb name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:57:40 force-systemd-env-472282 crio[842]: time="2026-01-11T08:57:40.717273077Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c5c82a83-7551-4420-a31b-0405abcb97cb name=/runtime.v1.ImageService/ImageStatus
	Jan 11 08:57:40 force-systemd-env-472282 crio[842]: time="2026-01-11T08:57:40.717783475Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=b889507c-29e5-426e-a615-b27153654be8 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 09:01:43.651823    5074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:43.652634    5074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:43.654301    5074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:43.654911    5074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 09:01:43.656532    5074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +33.770996] overlayfs: idmapped layers are currently not supported
	[Jan11 08:29] overlayfs: idmapped layers are currently not supported
	[  +3.600210] overlayfs: idmapped layers are currently not supported
	[Jan11 08:30] overlayfs: idmapped layers are currently not supported
	[Jan11 08:31] overlayfs: idmapped layers are currently not supported
	[Jan11 08:32] overlayfs: idmapped layers are currently not supported
	[Jan11 08:35] overlayfs: idmapped layers are currently not supported
	[Jan11 08:36] overlayfs: idmapped layers are currently not supported
	[Jan11 08:37] overlayfs: idmapped layers are currently not supported
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 09:01:43 up  3:44,  0 user,  load average: 0.37, 1.16, 1.88
	Linux force-systemd-env-472282 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 11 09:01:40 force-systemd-env-472282 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 09:01:41 force-systemd-env-472282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 645.
	Jan 11 09:01:41 force-systemd-env-472282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 09:01:41 force-systemd-env-472282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 09:01:41 force-systemd-env-472282 kubelet[4877]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 11 09:01:41 force-systemd-env-472282 kubelet[4877]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 11 09:01:41 force-systemd-env-472282 kubelet[4877]: E0111 09:01:41.741983    4877 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 09:01:41 force-systemd-env-472282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 09:01:41 force-systemd-env-472282 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 09:01:42 force-systemd-env-472282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Jan 11 09:01:42 force-systemd-env-472282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 09:01:42 force-systemd-env-472282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 09:01:42 force-systemd-env-472282 kubelet[4951]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 11 09:01:42 force-systemd-env-472282 kubelet[4951]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 11 09:01:42 force-systemd-env-472282 kubelet[4951]: E0111 09:01:42.523283    4951 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 09:01:42 force-systemd-env-472282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 09:01:42 force-systemd-env-472282 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 09:01:43 force-systemd-env-472282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Jan 11 09:01:43 force-systemd-env-472282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 09:01:43 force-systemd-env-472282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 09:01:43 force-systemd-env-472282 kubelet[4987]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 11 09:01:43 force-systemd-env-472282 kubelet[4987]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 11 09:01:43 force-systemd-env-472282 kubelet[4987]: E0111 09:01:43.297586    4987 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 09:01:43 force-systemd-env-472282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 09:01:43 force-systemd-env-472282 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-472282 -n force-systemd-env-472282
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-472282 -n force-systemd-env-472282: exit status 6 (352.012161ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 09:01:44.107827  761594 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-472282" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-472282" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-472282" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-472282
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-472282: (1.946416463s)
--- FAIL: TestForceSystemdEnv (507.19s)
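Root cause for the failure above: the kubelet journal shows kubelet v1.35 refusing to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so no control-plane static pods ever come up and kubeadm times out on http://127.0.0.1:10248/healthz after 4m0s. The preflight warning names the opt-out: set the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal sketch follows; the camelCase YAML spelling failCgroupV1 and the idea of writing a standalone config file are assumptions layered on the warning text, and only the --extra-config flag on the last line is taken verbatim from minikube's own suggestion in the log:

	# Hedged sketch: the KubeletConfiguration option the preflight warning names.
	# The field spelling failCgroupV1 is an assumption, not confirmed by this run.
	cat <<'EOF' > /tmp/kubelet-cgroupv1-optout.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF
	# Suggestion printed by minikube above (cgroup driver, not the v1 opt-out); profile name from this test.
	out/minikube-linux-arm64 start -p force-systemd-env-472282 --extra-config=kubelet.cgroup-driver=systemd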

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.96s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-656609 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-656609 --output=json --user=testUser: exit status 80 (1.959965887s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6471a2af-95c3-4cf3-9ea9-d24111c018a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-656609 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"39ce6e49-cc3d-466c-af8a-0cd2b5ae03ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2026-01-11T08:32:46Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"c0099901-1277-43ea-a62c-e8750fdefb13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-656609 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.96s)
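This failure has a different cause than the kubeadm timeouts above: the pause path shells out to `sudo runc list -f json` inside the node and fails because /run/runc does not exist, which points at the node's CRI-O using a different OCI runtime (or a different state directory) than the one minikube queries. A minimal diagnostic sketch, under the assumption that `minikube ssh -p <profile> -- <cmd>` is available and using the profile name from this test; only the final command is verbatim from the error message:

	# Which runtime does the CRI report, and which runtime state directory actually exists on the node?
	out/minikube-linux-arm64 ssh -p json-output-656609 -- 'sudo crictl info | head -n 40'
	out/minikube-linux-arm64 ssh -p json-output-656609 -- 'ls -ld /run/runc /run/crun'
	# The exact command the pause code path runs (verbatim from the error above).
	out/minikube-linux-arm64 ssh -p json-output-656609 -- 'sudo runc list -f json'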

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.49s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-656609 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-656609 --output=json --user=testUser: exit status 80 (1.489906263s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a94756fd-1861-4361-9900-bea5fc2aaab4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-656609 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"4b61f5b9-6fd7-4955-987f-3151fa8b55fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2026-01-11T08:32:48Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"4a461603-4734-4e6b-bb52-eda2b436bfcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-656609 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.49s)
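Same underlying failure as TestJSONOutput/pause/Command above: the unpause path also runs `sudo runc list -f json` (reported here as "list paused") and hits the missing /run/runc directory, so the diagnostic sketch after the pause test applies unchanged; only the log file minikube asks to attach differs (/tmp/minikube_unpause_... instead of /tmp/minikube_pause_...).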

                                                
                                    
x
+
TestPause/serial/Pause (8.83s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-042270 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-042270 --alsologtostderr -v=5: exit status 80 (2.328018217s)

                                                
                                                
-- stdout --
	* Pausing node pause-042270 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:47:27.837122  713958 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:47:27.837949  713958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:47:27.838010  713958 out.go:374] Setting ErrFile to fd 2...
	I0111 08:47:27.838033  713958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:47:27.838388  713958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:47:27.838720  713958 out.go:368] Setting JSON to false
	I0111 08:47:27.838771  713958 mustload.go:66] Loading cluster: pause-042270
	I0111 08:47:27.839223  713958 config.go:182] Loaded profile config "pause-042270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:47:27.839748  713958 cli_runner.go:164] Run: docker container inspect pause-042270 --format={{.State.Status}}
	I0111 08:47:27.892332  713958 host.go:66] Checking if "pause-042270" exists ...
	I0111 08:47:27.892666  713958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:47:27.981628  713958 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2026-01-11 08:47:27.967224971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:47:27.982412  713958 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-042270 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true
) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0111 08:47:27.985842  713958 out.go:179] * Pausing node pause-042270 ... 
	I0111 08:47:27.988723  713958 host.go:66] Checking if "pause-042270" exists ...
	I0111 08:47:27.989050  713958 ssh_runner.go:195] Run: systemctl --version
	I0111 08:47:27.989095  713958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-042270
	I0111 08:47:28.024548  713958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33698 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/pause-042270/id_rsa Username:docker}
	I0111 08:47:28.145328  713958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:47:28.169707  713958 pause.go:52] kubelet running: true
	I0111 08:47:28.169785  713958 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 08:47:28.529766  713958 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 08:47:28.529869  713958 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 08:47:28.637612  713958 cri.go:96] found id: "b061ecb176606fab39561201f0b787b9dd4d71b0fc0623474ac1a6dc66c8e2c4"
	I0111 08:47:28.637637  713958 cri.go:96] found id: "6757114d0bafdfdc9e1e9a1d717d07e9d57e8a08cff36664f3830bd435d07c8e"
	I0111 08:47:28.637642  713958 cri.go:96] found id: "9e7781bd18991b364c5844f04276556b6c10c7136844673ea950edbde5503892"
	I0111 08:47:28.637646  713958 cri.go:96] found id: "9b9a55dfc3ce9cd0cb4e9ff91cf50836a88a208da4303f3ead3af6b677e6d084"
	I0111 08:47:28.637650  713958 cri.go:96] found id: "d8e4dc716e9fbad51f33509cd8d8d0eb48040e799342b510f6b5274aab249c86"
	I0111 08:47:28.637654  713958 cri.go:96] found id: "4f37daff12209c7cbe5088130ea4aea7c5917b3aef9b3d2100f02d6698061862"
	I0111 08:47:28.637675  713958 cri.go:96] found id: "3386692eec9fee759f4c5f30957286e96e3ffe1d2f0d8a8509abfb8f37a2466f"
	I0111 08:47:28.637681  713958 cri.go:96] found id: "608d40b7c34b0aa005a4dc964b0820a313cca729a0940d77d4611c6f8f338715"
	I0111 08:47:28.637685  713958 cri.go:96] found id: "2ef6b516b54d3ef537d1455d60abd47dafe430aefdb427a778bb5733ef2f39a4"
	I0111 08:47:28.637698  713958 cri.go:96] found id: "0b7fcbbd82786fed4387b48391bd12d068f8935bd11d2de482be853f78820f5f"
	I0111 08:47:28.637702  713958 cri.go:96] found id: "9bc6322fbfe5b3aeb3cc28d0de46bd73d50006f6f24385238e2d536bfb5ca556"
	I0111 08:47:28.637712  713958 cri.go:96] found id: "a8599322e647ea17e3e1d9753183eca263c26a46964d0822f85fab8f2399a7fa"
	I0111 08:47:28.637716  713958 cri.go:96] found id: "9bc5caca9724764aa07f8310a52ec008336dd061840833714e8054f0bc2d4592"
	I0111 08:47:28.637719  713958 cri.go:96] found id: "4091f664f637aede6861e233180d95399d5deaeeced980fb3d4654d7fd3396f3"
	I0111 08:47:28.637722  713958 cri.go:96] found id: "c657c976d677e9e5a67a63345b859b3473205cff8345aad91fcee3a3485251a6"
	I0111 08:47:28.637727  713958 cri.go:96] found id: "043f80b8901207f1b00f2e5d8307335f58820fe7d929fc27e1c0b07106271e96"
	I0111 08:47:28.637736  713958 cri.go:96] found id: "5bd088708d4deb3e026194575f822c0860ec80b9327e4f9e76e6e0fa14fbe2f1"
	I0111 08:47:28.637753  713958 cri.go:96] found id: ""
	I0111 08:47:28.637811  713958 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:47:28.662680  713958 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:47:28Z" level=error msg="open /run/runc: no such file or directory"
	I0111 08:47:29.012253  713958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:47:29.032441  713958 pause.go:52] kubelet running: false
	I0111 08:47:29.032536  713958 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 08:47:29.266474  713958 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 08:47:29.266583  713958 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 08:47:29.389207  713958 cri.go:96] found id: "b061ecb176606fab39561201f0b787b9dd4d71b0fc0623474ac1a6dc66c8e2c4"
	I0111 08:47:29.389233  713958 cri.go:96] found id: "6757114d0bafdfdc9e1e9a1d717d07e9d57e8a08cff36664f3830bd435d07c8e"
	I0111 08:47:29.389239  713958 cri.go:96] found id: "9e7781bd18991b364c5844f04276556b6c10c7136844673ea950edbde5503892"
	I0111 08:47:29.389243  713958 cri.go:96] found id: "9b9a55dfc3ce9cd0cb4e9ff91cf50836a88a208da4303f3ead3af6b677e6d084"
	I0111 08:47:29.389246  713958 cri.go:96] found id: "d8e4dc716e9fbad51f33509cd8d8d0eb48040e799342b510f6b5274aab249c86"
	I0111 08:47:29.389249  713958 cri.go:96] found id: "4f37daff12209c7cbe5088130ea4aea7c5917b3aef9b3d2100f02d6698061862"
	I0111 08:47:29.389252  713958 cri.go:96] found id: "3386692eec9fee759f4c5f30957286e96e3ffe1d2f0d8a8509abfb8f37a2466f"
	I0111 08:47:29.389275  713958 cri.go:96] found id: "608d40b7c34b0aa005a4dc964b0820a313cca729a0940d77d4611c6f8f338715"
	I0111 08:47:29.389284  713958 cri.go:96] found id: "2ef6b516b54d3ef537d1455d60abd47dafe430aefdb427a778bb5733ef2f39a4"
	I0111 08:47:29.389291  713958 cri.go:96] found id: "0b7fcbbd82786fed4387b48391bd12d068f8935bd11d2de482be853f78820f5f"
	I0111 08:47:29.389294  713958 cri.go:96] found id: "9bc6322fbfe5b3aeb3cc28d0de46bd73d50006f6f24385238e2d536bfb5ca556"
	I0111 08:47:29.389298  713958 cri.go:96] found id: "a8599322e647ea17e3e1d9753183eca263c26a46964d0822f85fab8f2399a7fa"
	I0111 08:47:29.389310  713958 cri.go:96] found id: "9bc5caca9724764aa07f8310a52ec008336dd061840833714e8054f0bc2d4592"
	I0111 08:47:29.389313  713958 cri.go:96] found id: "4091f664f637aede6861e233180d95399d5deaeeced980fb3d4654d7fd3396f3"
	I0111 08:47:29.389316  713958 cri.go:96] found id: "c657c976d677e9e5a67a63345b859b3473205cff8345aad91fcee3a3485251a6"
	I0111 08:47:29.389327  713958 cri.go:96] found id: "043f80b8901207f1b00f2e5d8307335f58820fe7d929fc27e1c0b07106271e96"
	I0111 08:47:29.389335  713958 cri.go:96] found id: "5bd088708d4deb3e026194575f822c0860ec80b9327e4f9e76e6e0fa14fbe2f1"
	I0111 08:47:29.389362  713958 cri.go:96] found id: ""
	I0111 08:47:29.389440  713958 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:47:29.670565  713958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:47:29.688863  713958 pause.go:52] kubelet running: false
	I0111 08:47:29.688956  713958 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 08:47:29.878806  713958 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 08:47:29.878924  713958 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 08:47:29.989630  713958 cri.go:96] found id: "b061ecb176606fab39561201f0b787b9dd4d71b0fc0623474ac1a6dc66c8e2c4"
	I0111 08:47:29.989675  713958 cri.go:96] found id: "6757114d0bafdfdc9e1e9a1d717d07e9d57e8a08cff36664f3830bd435d07c8e"
	I0111 08:47:29.989681  713958 cri.go:96] found id: "9e7781bd18991b364c5844f04276556b6c10c7136844673ea950edbde5503892"
	I0111 08:47:29.989685  713958 cri.go:96] found id: "9b9a55dfc3ce9cd0cb4e9ff91cf50836a88a208da4303f3ead3af6b677e6d084"
	I0111 08:47:29.989690  713958 cri.go:96] found id: "d8e4dc716e9fbad51f33509cd8d8d0eb48040e799342b510f6b5274aab249c86"
	I0111 08:47:29.989694  713958 cri.go:96] found id: "4f37daff12209c7cbe5088130ea4aea7c5917b3aef9b3d2100f02d6698061862"
	I0111 08:47:29.989723  713958 cri.go:96] found id: "3386692eec9fee759f4c5f30957286e96e3ffe1d2f0d8a8509abfb8f37a2466f"
	I0111 08:47:29.989733  713958 cri.go:96] found id: "608d40b7c34b0aa005a4dc964b0820a313cca729a0940d77d4611c6f8f338715"
	I0111 08:47:29.989744  713958 cri.go:96] found id: "2ef6b516b54d3ef537d1455d60abd47dafe430aefdb427a778bb5733ef2f39a4"
	I0111 08:47:29.989756  713958 cri.go:96] found id: "0b7fcbbd82786fed4387b48391bd12d068f8935bd11d2de482be853f78820f5f"
	I0111 08:47:29.989761  713958 cri.go:96] found id: "9bc6322fbfe5b3aeb3cc28d0de46bd73d50006f6f24385238e2d536bfb5ca556"
	I0111 08:47:29.989764  713958 cri.go:96] found id: "a8599322e647ea17e3e1d9753183eca263c26a46964d0822f85fab8f2399a7fa"
	I0111 08:47:29.989767  713958 cri.go:96] found id: "9bc5caca9724764aa07f8310a52ec008336dd061840833714e8054f0bc2d4592"
	I0111 08:47:29.989770  713958 cri.go:96] found id: "4091f664f637aede6861e233180d95399d5deaeeced980fb3d4654d7fd3396f3"
	I0111 08:47:29.989774  713958 cri.go:96] found id: "c657c976d677e9e5a67a63345b859b3473205cff8345aad91fcee3a3485251a6"
	I0111 08:47:29.989801  713958 cri.go:96] found id: "043f80b8901207f1b00f2e5d8307335f58820fe7d929fc27e1c0b07106271e96"
	I0111 08:47:29.989820  713958 cri.go:96] found id: "5bd088708d4deb3e026194575f822c0860ec80b9327e4f9e76e6e0fa14fbe2f1"
	I0111 08:47:29.989825  713958 cri.go:96] found id: ""
	I0111 08:47:29.989892  713958 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 08:47:30.007284  713958 out.go:203] 
	W0111 08:47:30.022534  713958 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:47:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 08:47:30.022569  713958 out.go:285] * 
	W0111 08:47:30.070732  713958 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:47:30.076027  713958 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-042270 --alsologtostderr -v=5" : exit status 80
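The exit status 80 above reduces to the container-state check the pause path runs on the node: `sudo runc list -f json` exits 1 because `/run/runc` is missing on this CRI-O node. A minimal diagnostic sketch, assuming shell access to the profile via `out/minikube-linux-arm64 -p pause-042270 ssh` (the `--root` flag and the `/run/runc` default are standard runc behavior, not taken from this log):

	# run inside the node
	sudo runc list -f json            # the exact command the pause path runs; fails with "open /run/runc: no such file or directory"
	ls -ld /run/runc                  # check whether runc's default state directory exists at all
	sudo runc --root /run/runc list   # same check with the state root spelled out explicitly
	sudo crictl ps -a                 # CRI-O itself still reports the containers listed above
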
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-042270
helpers_test.go:244: (dbg) docker inspect pause-042270:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4",
	        "Created": "2026-01-11T08:44:39.089747168Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 701189,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T08:44:39.26172845Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4/hostname",
	        "HostsPath": "/var/lib/docker/containers/4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4/hosts",
	        "LogPath": "/var/lib/docker/containers/4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4/4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4-json.log",
	        "Name": "/pause-042270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-042270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-042270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4",
	                "LowerDir": "/var/lib/docker/overlay2/145a0059f90af945ada96f805e1d8fcd8809c3f69b4c236e4cc6db6090ea0ff7-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/145a0059f90af945ada96f805e1d8fcd8809c3f69b4c236e4cc6db6090ea0ff7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/145a0059f90af945ada96f805e1d8fcd8809c3f69b4c236e4cc6db6090ea0ff7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/145a0059f90af945ada96f805e1d8fcd8809c3f69b4c236e4cc6db6090ea0ff7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-042270",
	                "Source": "/var/lib/docker/volumes/pause-042270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-042270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-042270",
	                "name.minikube.sigs.k8s.io": "pause-042270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f40d843f4a1f982b7ecd90ecd0abae6c6226f2c27a5013548c8e7983f087b85",
	            "SandboxKey": "/var/run/docker/netns/3f40d843f4a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33698"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33699"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33702"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33700"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33701"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-042270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:6b:b2:fa:ac:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70b55988d363dc5beae59a4f2c0270f01d6c8f47c86a4e8f237248f42184fb91",
	                    "EndpointID": "ec397850e897e14f151ae4a76b88a93261b626959ea9cff59667b64c859cce6c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-042270",
	                        "4561dea88724"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
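The inspect output above is how the harness reaches the node: ports 22, 2376, 5000, 8443 and 32443 are published only on 127.0.0.1, with the host port assigned at container start. A small sketch of the same SSH-port lookup, assuming the docker CLI on the host and reusing the `docker container inspect -f` template that appears in the start log further down:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-042270
	# prints 33698 for this run, matching NetworkSettings.Ports in the inspect output above
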
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-042270 -n pause-042270
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-042270 -n pause-042270: exit status 2 (482.766816ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-042270 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-042270 logs -n 25: (1.923343211s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p multinode-869861-m03                                                                                                                  │ multinode-869861-m03        │ jenkins │ v1.37.0 │ 11 Jan 26 08:42 UTC │ 11 Jan 26 08:42 UTC │
	│ delete  │ -p multinode-869861                                                                                                                      │ multinode-869861            │ jenkins │ v1.37.0 │ 11 Jan 26 08:42 UTC │ 11 Jan 26 08:42 UTC │
	│ start   │ -p scheduled-stop-415795 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:42 UTC │ 11 Jan 26 08:43 UTC │
	│ stop    │ -p scheduled-stop-415795 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --cancel-scheduled                                                                                              │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │ 11 Jan 26 08:43 UTC │
	│ stop    │ -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │ 11 Jan 26 08:43 UTC │
	│ delete  │ -p scheduled-stop-415795                                                                                                                 │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:44 UTC │ 11 Jan 26 08:44 UTC │
	│ start   │ -p insufficient-storage-616205 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-616205 │ jenkins │ v1.37.0 │ 11 Jan 26 08:44 UTC │                     │
	│ delete  │ -p insufficient-storage-616205                                                                                                           │ insufficient-storage-616205 │ jenkins │ v1.37.0 │ 11 Jan 26 08:44 UTC │ 11 Jan 26 08:44 UTC │
	│ start   │ -p pause-042270 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-042270                │ jenkins │ v1.37.0 │ 11 Jan 26 08:44 UTC │ 11 Jan 26 08:45 UTC │
	│ start   │ -p missing-upgrade-819079 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-819079      │ jenkins │ v1.35.0 │ 11 Jan 26 08:44 UTC │ 11 Jan 26 08:45 UTC │
	│ start   │ -p pause-042270 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-042270                │ jenkins │ v1.37.0 │ 11 Jan 26 08:45 UTC │ 11 Jan 26 08:47 UTC │
	│ start   │ -p missing-upgrade-819079 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-819079      │ jenkins │ v1.37.0 │ 11 Jan 26 08:45 UTC │ 11 Jan 26 08:46 UTC │
	│ delete  │ -p missing-upgrade-819079                                                                                                                │ missing-upgrade-819079      │ jenkins │ v1.37.0 │ 11 Jan 26 08:46 UTC │ 11 Jan 26 08:46 UTC │
	│ start   │ -p kubernetes-upgrade-102854 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-102854   │ jenkins │ v1.37.0 │ 11 Jan 26 08:46 UTC │ 11 Jan 26 08:46 UTC │
	│ stop    │ -p kubernetes-upgrade-102854 --alsologtostderr                                                                                           │ kubernetes-upgrade-102854   │ jenkins │ v1.37.0 │ 11 Jan 26 08:46 UTC │ 11 Jan 26 08:46 UTC │
	│ start   │ -p kubernetes-upgrade-102854 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-102854   │ jenkins │ v1.37.0 │ 11 Jan 26 08:46 UTC │                     │
	│ pause   │ -p pause-042270 --alsologtostderr -v=5                                                                                                   │ pause-042270                │ jenkins │ v1.37.0 │ 11 Jan 26 08:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 08:46:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 08:46:59.900853  711632 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:46:59.901254  711632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:46:59.901268  711632 out.go:374] Setting ErrFile to fd 2...
	I0111 08:46:59.901276  711632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:46:59.902005  711632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:46:59.902644  711632 out.go:368] Setting JSON to false
	I0111 08:46:59.903604  711632 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12570,"bootTime":1768108650,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 08:46:59.903804  711632 start.go:143] virtualization:  
	I0111 08:46:59.906779  711632 out.go:179] * [kubernetes-upgrade-102854] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:46:59.909137  711632 notify.go:221] Checking for updates...
	I0111 08:46:59.909697  711632 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:46:59.912952  711632 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:46:59.915858  711632 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:46:59.918746  711632 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 08:46:59.921537  711632 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:46:59.924458  711632 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:46:59.927856  711632 config.go:182] Loaded profile config "kubernetes-upgrade-102854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 08:46:59.928463  711632 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:46:59.955361  711632 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:46:59.955481  711632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:47:00.067955  711632 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:47:00.032988496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:47:00.068083  711632 docker.go:319] overlay module found
	I0111 08:47:00.074106  711632 out.go:179] * Using the docker driver based on existing profile
	I0111 08:47:00.077194  711632 start.go:309] selected driver: docker
	I0111 08:47:00.077217  711632 start.go:928] validating driver "docker" against &{Name:kubernetes-upgrade-102854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-102854 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:47:00.077320  711632 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:47:00.078224  711632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:47:00.257598  711632 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:47:00.235416306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:47:00.258000  711632 cni.go:84] Creating CNI manager for ""
	I0111 08:47:00.258065  711632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:47:00.258108  711632 start.go:353] cluster config:
	{Name:kubernetes-upgrade-102854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kubernetes-upgrade-102854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:47:00.261629  711632 out.go:179] * Starting "kubernetes-upgrade-102854" primary control-plane node in "kubernetes-upgrade-102854" cluster
	I0111 08:47:00.266873  711632 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 08:47:00.271317  711632 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:47:00.274823  711632 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:47:00.274826  711632 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:47:00.274913  711632 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 08:47:00.274933  711632 cache.go:65] Caching tarball of preloaded images
	I0111 08:47:00.275030  711632 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 08:47:00.275039  711632 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 08:47:00.275158  711632 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/config.json ...
	I0111 08:47:00.314397  711632 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:47:00.314423  711632 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:47:00.314441  711632 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:47:00.314477  711632 start.go:360] acquireMachinesLock for kubernetes-upgrade-102854: {Name:mka28b58380642840c174fda94f450ba2ccc60e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:47:00.314553  711632 start.go:364] duration metric: took 57.42µs to acquireMachinesLock for "kubernetes-upgrade-102854"
	I0111 08:47:00.314577  711632 start.go:96] Skipping create...Using existing machine configuration
	I0111 08:47:00.314583  711632 fix.go:54] fixHost starting: 
	I0111 08:47:00.314881  711632 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-102854 --format={{.State.Status}}
	I0111 08:47:00.346431  711632 fix.go:112] recreateIfNeeded on kubernetes-upgrade-102854: state=Stopped err=<nil>
	W0111 08:47:00.346496  711632 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 08:47:00.350577  711632 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-102854" ...
	I0111 08:47:00.350736  711632 cli_runner.go:164] Run: docker start kubernetes-upgrade-102854
	I0111 08:47:00.660117  711632 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-102854 --format={{.State.Status}}
	I0111 08:47:00.684742  711632 kic.go:430] container "kubernetes-upgrade-102854" state is running.
	I0111 08:47:00.685136  711632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-102854
	I0111 08:47:00.708208  711632 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/config.json ...
	I0111 08:47:00.708659  711632 machine.go:94] provisionDockerMachine start ...
	I0111 08:47:00.708751  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:00.732808  711632 main.go:144] libmachine: Using SSH client type: native
	I0111 08:47:00.733142  711632 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33718 <nil> <nil>}
	I0111 08:47:00.733151  711632 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:47:00.734590  711632 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52560->127.0.0.1:33718: read: connection reset by peer
	I0111 08:47:03.881909  711632 main.go:144] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-102854
	
	I0111 08:47:03.881953  711632 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-102854"
	I0111 08:47:03.882045  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:03.901181  711632 main.go:144] libmachine: Using SSH client type: native
	I0111 08:47:03.901491  711632 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33718 <nil> <nil>}
	I0111 08:47:03.901503  711632 main.go:144] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-102854 && echo "kubernetes-upgrade-102854" | sudo tee /etc/hostname
	I0111 08:47:04.063383  711632 main.go:144] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-102854
	
	I0111 08:47:04.063459  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:04.081363  711632 main.go:144] libmachine: Using SSH client type: native
	I0111 08:47:04.081659  711632 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33718 <nil> <nil>}
	I0111 08:47:04.081675  711632 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-102854' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-102854/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-102854' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:47:04.230356  711632 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 08:47:04.230381  711632 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 08:47:04.230419  711632 ubuntu.go:190] setting up certificates
	I0111 08:47:04.230428  711632 provision.go:84] configureAuth start
	I0111 08:47:04.230495  711632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-102854
	I0111 08:47:04.257408  711632 provision.go:143] copyHostCerts
	I0111 08:47:04.257476  711632 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 08:47:04.257494  711632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 08:47:04.257573  711632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 08:47:04.257671  711632 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 08:47:04.257681  711632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 08:47:04.257708  711632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 08:47:04.257770  711632 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 08:47:04.257780  711632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 08:47:04.257804  711632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 08:47:04.257855  711632 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-102854 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-102854 localhost minikube]
	I0111 08:47:04.327278  711632 provision.go:177] copyRemoteCerts
	I0111 08:47:04.327349  711632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:47:04.327401  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:04.344374  711632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33718 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/kubernetes-upgrade-102854/id_rsa Username:docker}
	I0111 08:47:04.451244  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0111 08:47:04.470583  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 08:47:04.489628  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 08:47:04.507925  711632 provision.go:87] duration metric: took 277.472791ms to configureAuth
	I0111 08:47:04.508009  711632 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:47:04.508222  711632 config.go:182] Loaded profile config "kubernetes-upgrade-102854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:47:04.508340  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:04.526224  711632 main.go:144] libmachine: Using SSH client type: native
	I0111 08:47:04.526538  711632 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33718 <nil> <nil>}
	I0111 08:47:04.526560  711632 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 08:47:04.846041  711632 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 08:47:04.846063  711632 machine.go:97] duration metric: took 4.137390995s to provisionDockerMachine
	I0111 08:47:04.846075  711632 start.go:293] postStartSetup for "kubernetes-upgrade-102854" (driver="docker")
	I0111 08:47:04.846087  711632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:47:04.846177  711632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:47:04.846221  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:04.866752  711632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33718 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/kubernetes-upgrade-102854/id_rsa Username:docker}
	I0111 08:47:04.969748  711632 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:47:04.973050  711632 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:47:04.973080  711632 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:47:04.973092  711632 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 08:47:04.973149  711632 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 08:47:04.973233  711632 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 08:47:04.973341  711632 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:47:04.980875  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 08:47:04.998472  711632 start.go:296] duration metric: took 152.382002ms for postStartSetup
	I0111 08:47:04.998670  711632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:47:04.998723  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:05.018171  711632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33718 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/kubernetes-upgrade-102854/id_rsa Username:docker}
	I0111 08:47:05.119555  711632 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:47:05.124995  711632 fix.go:56] duration metric: took 4.810405116s for fixHost
	I0111 08:47:05.125023  711632 start.go:83] releasing machines lock for "kubernetes-upgrade-102854", held for 4.810460707s
	I0111 08:47:05.125101  711632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-102854
	I0111 08:47:05.142900  711632 ssh_runner.go:195] Run: cat /version.json
	I0111 08:47:05.142920  711632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:47:05.142952  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:05.142985  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:05.167888  711632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33718 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/kubernetes-upgrade-102854/id_rsa Username:docker}
	I0111 08:47:05.174341  711632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33718 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/kubernetes-upgrade-102854/id_rsa Username:docker}
	I0111 08:47:05.270005  711632 ssh_runner.go:195] Run: systemctl --version
	I0111 08:47:05.370735  711632 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 08:47:05.409356  711632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:47:05.413964  711632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:47:05.414040  711632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:47:05.422326  711632 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 08:47:05.422353  711632 start.go:496] detecting cgroup driver to use...
	I0111 08:47:05.422386  711632 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 08:47:05.422436  711632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 08:47:05.438193  711632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:47:05.451600  711632 docker.go:218] disabling cri-docker service (if available) ...
	I0111 08:47:05.451669  711632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 08:47:05.467317  711632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 08:47:05.480741  711632 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 08:47:05.598814  711632 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 08:47:05.714703  711632 docker.go:234] disabling docker service ...
	I0111 08:47:05.714804  711632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 08:47:05.731280  711632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 08:47:05.744962  711632 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 08:47:05.851763  711632 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 08:47:05.968857  711632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 08:47:05.981598  711632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:47:05.997806  711632 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 08:47:05.997874  711632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.013613  711632 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 08:47:06.013695  711632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.025391  711632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.036050  711632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.046108  711632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:47:06.055563  711632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.064815  711632 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.074113  711632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.083362  711632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:47:06.090970  711632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:47:06.098776  711632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:47:06.214159  711632 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 08:47:06.418146  711632 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 08:47:06.418229  711632 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 08:47:06.422263  711632 start.go:574] Will wait 60s for crictl version
	I0111 08:47:06.422338  711632 ssh_runner.go:195] Run: which crictl
	I0111 08:47:06.425837  711632 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:47:06.451140  711632 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 08:47:06.451234  711632 ssh_runner.go:195] Run: crio --version
	I0111 08:47:06.479033  711632 ssh_runner.go:195] Run: crio --version
	I0111 08:47:06.511532  711632 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
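	The two 60-second waits logged above (first for the socket, then for crictl) boil down to polling; a rough shell equivalent, with the socket path and timeout taken from the log:

    # wait up to 60s for the CRI-O socket, then query the runtime version over CRI
    for _ in $(seq 1 60); do
      stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
      sleep 1
    done
    sudo "$(which crictl)" version   # prints the RuntimeName/RuntimeVersion/RuntimeApiVersion lines seen above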
	I0111 08:47:07.149718  705342 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.535080118s)
	I0111 08:47:07.149745  705342 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 08:47:07.149797  705342 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 08:47:07.155884  705342 start.go:574] Will wait 60s for crictl version
	I0111 08:47:07.155949  705342 ssh_runner.go:195] Run: which crictl
	I0111 08:47:07.167644  705342 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:47:07.206840  705342 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 08:47:07.206927  705342 ssh_runner.go:195] Run: crio --version
	I0111 08:47:07.253381  705342 ssh_runner.go:195] Run: crio --version
	I0111 08:47:07.297530  705342 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 08:47:07.300535  705342 cli_runner.go:164] Run: docker network inspect pause-042270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:47:07.318546  705342 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 08:47:07.323192  705342 kubeadm.go:884] updating cluster {Name:pause-042270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-042270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:47:07.323366  705342 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:47:07.323433  705342 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:47:07.376125  705342 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:47:07.376151  705342 crio.go:433] Images already preloaded, skipping extraction
	I0111 08:47:07.376208  705342 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:47:07.416160  705342 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:47:07.416186  705342 cache_images.go:86] Images are preloaded, skipping loading
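	The preload check above simply compares the runtime's image list against the expected set for this Kubernetes version; the same data can be inspected by hand (jq assumed to be available on the node):

    # list the tags CRI-O already has, i.e. what "all images are preloaded" was decided from
    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort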
	I0111 08:47:07.416195  705342 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0111 08:47:07.416292  705342 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-042270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-042270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
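	The empty ExecStart= line in the drop-in above is the standard systemd idiom for replacing, rather than appending to, the ExecStart of the base kubelet unit; the file is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. Checking the merged result on the node would look like:

    # show the effective unit (base kubelet.service plus the 10-kubeadm.conf drop-in)
    sudo systemctl daemon-reload
    systemctl cat kubelet.service
    systemctl show kubelet -p ExecStart   # confirms only the overriding ExecStart survives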
	I0111 08:47:07.416375  705342 ssh_runner.go:195] Run: crio config
	I0111 08:47:07.492884  705342 cni.go:84] Creating CNI manager for ""
	I0111 08:47:07.492909  705342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:47:07.492932  705342 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:47:07.492954  705342 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-042270 NodeName:pause-042270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:47:07.493087  705342 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-042270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 08:47:07.493166  705342 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:47:07.506662  705342 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:47:07.506766  705342 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:47:07.516730  705342 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0111 08:47:07.532885  705342 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:47:07.551069  705342 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
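	The rendered manifest just written to /var/tmp/minikube/kubeadm.yaml.new can also be checked on its own; recent kubeadm releases ship a validate subcommand (a sketch, run on the node):

    # statically validate the generated InitConfiguration/ClusterConfiguration/kubelet/kube-proxy documents
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new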
	I0111 08:47:07.566749  705342 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:47:07.571129  705342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:47:07.769273  705342 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:47:07.794856  705342 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270 for IP: 192.168.76.2
	I0111 08:47:07.794879  705342 certs.go:195] generating shared ca certs ...
	I0111 08:47:07.794898  705342 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:47:07.795070  705342 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 08:47:07.795118  705342 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 08:47:07.795130  705342 certs.go:257] generating profile certs ...
	I0111 08:47:07.795252  705342 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/client.key
	I0111 08:47:07.795333  705342 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/apiserver.key.b14d61a9
	I0111 08:47:07.795419  705342 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/proxy-client.key
	I0111 08:47:07.795548  705342 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 08:47:07.795596  705342 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 08:47:07.795609  705342 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 08:47:07.795635  705342 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 08:47:07.795662  705342 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:47:07.795698  705342 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 08:47:07.795763  705342 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 08:47:07.796461  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:47:07.821453  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 08:47:07.842689  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:47:07.866019  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:47:07.889080  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0111 08:47:07.911240  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:47:07.932805  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:47:07.955112  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 08:47:07.977148  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 08:47:07.998790  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:47:08.021965  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 08:47:08.044381  705342 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:47:08.060719  705342 ssh_runner.go:195] Run: openssl version
	I0111 08:47:08.068231  705342 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 08:47:08.076894  705342 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 08:47:08.085582  705342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 08:47:08.089925  705342 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 08:47:08.090002  705342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 08:47:08.132329  705342 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:47:08.141129  705342 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 08:47:08.149604  705342 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 08:47:08.158266  705342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 08:47:08.162668  705342 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 08:47:08.162752  705342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 08:47:08.207327  705342 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:47:08.215949  705342 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:08.224205  705342 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:47:08.232626  705342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:08.237144  705342 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:08.237218  705342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:08.279593  705342 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
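	The three ln/test -L sequences above follow the OpenSSL trust-directory convention: each CA PEM in /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-name hash with a .0 suffix (b5213941.0 above is the hash of minikubeCA.pem). For a single file the same idea is:

    # compute the subject hash openssl looks up, then create the <hash>.0 symlink
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
    sudo test -L "/etc/ssl/certs/${hash}.0" && echo "hashed link in place"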
	I0111 08:47:08.288292  705342 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:47:08.292819  705342 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 08:47:08.342291  705342 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 08:47:08.387439  705342 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 08:47:08.431089  705342 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 08:47:08.476263  705342 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 08:47:08.519574  705342 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
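	The -checkend 86400 runs above exit non-zero if a certificate expires within the next 24 hours, which is how the remaining control-plane certs are vetted before being reused. As a standalone check:

    # exit status 0 = still valid for at least 86400s (24h); 1 = expiring soon
    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
      echo "cert valid for at least another day"
    else
      echo "cert expires within 24h"
    fi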
	I0111 08:47:08.571393  705342 kubeadm.go:401] StartCluster: {Name:pause-042270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-042270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:47:08.571509  705342 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:47:08.571572  705342 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:47:08.604139  705342 cri.go:96] found id: "608d40b7c34b0aa005a4dc964b0820a313cca729a0940d77d4611c6f8f338715"
	I0111 08:47:08.604168  705342 cri.go:96] found id: "2ef6b516b54d3ef537d1455d60abd47dafe430aefdb427a778bb5733ef2f39a4"
	I0111 08:47:08.604173  705342 cri.go:96] found id: "0b7fcbbd82786fed4387b48391bd12d068f8935bd11d2de482be853f78820f5f"
	I0111 08:47:08.604176  705342 cri.go:96] found id: "9bc6322fbfe5b3aeb3cc28d0de46bd73d50006f6f24385238e2d536bfb5ca556"
	I0111 08:47:08.604179  705342 cri.go:96] found id: "a8599322e647ea17e3e1d9753183eca263c26a46964d0822f85fab8f2399a7fa"
	I0111 08:47:08.604183  705342 cri.go:96] found id: "9bc5caca9724764aa07f8310a52ec008336dd061840833714e8054f0bc2d4592"
	I0111 08:47:08.604185  705342 cri.go:96] found id: "4091f664f637aede6861e233180d95399d5deaeeced980fb3d4654d7fd3396f3"
	I0111 08:47:08.604188  705342 cri.go:96] found id: "c657c976d677e9e5a67a63345b859b3473205cff8345aad91fcee3a3485251a6"
	I0111 08:47:08.604192  705342 cri.go:96] found id: "e4f927ff762a02e598b216fe9c75e5e7250c2463356b93b401329e89a8fd483d"
	I0111 08:47:08.604199  705342 cri.go:96] found id: "043f80b8901207f1b00f2e5d8307335f58820fe7d929fc27e1c0b07106271e96"
	I0111 08:47:08.604202  705342 cri.go:96] found id: "7df702dfc7823702c012152170a5970a42c963801b47788b5c53f0a68a4b5b0a"
	I0111 08:47:08.604205  705342 cri.go:96] found id: "5bd088708d4deb3e026194575f822c0860ec80b9327e4f9e76e6e0fa14fbe2f1"
	I0111 08:47:08.604223  705342 cri.go:96] found id: ""
	I0111 08:47:08.604275  705342 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 08:47:08.617811  705342 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:47:08Z" level=error msg="open /run/runc: no such file or directory"
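	The runc failure above is treated as non-fatal (note the W warning level): with no containers ever created under runc's state root, /run/runc does not exist, so the unpause probe finds nothing paused and the start path continues. Reproducing the probe by hand:

    # same probe minikube ran; --root makes the default state directory explicit
    sudo runc --root /run/runc list -f json \
      || echo "no runc state directory yet - nothing is paused"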
	I0111 08:47:08.617879  705342 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:47:08.626017  705342 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 08:47:08.626041  705342 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 08:47:08.626106  705342 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 08:47:08.633515  705342 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 08:47:08.639231  705342 kubeconfig.go:125] found "pause-042270" server: "https://192.168.76.2:8443"
	I0111 08:47:08.640063  705342 kapi.go:59] client config for pause-042270: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/client.crt", KeyFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/client.key", CAFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f7bf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0111 08:47:08.640614  705342 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0111 08:47:08.640640  705342 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0111 08:47:08.640652  705342 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0111 08:47:08.640658  705342 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0111 08:47:08.640663  705342 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0111 08:47:08.640668  705342 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0111 08:47:08.640982  705342 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 08:47:08.668764  705342 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0111 08:47:08.668801  705342 kubeadm.go:602] duration metric: took 42.753789ms to restartPrimaryControlPlane
	I0111 08:47:08.668812  705342 kubeadm.go:403] duration metric: took 97.427729ms to StartCluster
	I0111 08:47:08.668830  705342 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:47:08.668899  705342 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:47:08.669540  705342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:47:08.669738  705342 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 08:47:08.670075  705342 config.go:182] Loaded profile config "pause-042270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:47:08.670155  705342 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 08:47:08.675753  705342 out.go:179] * Enabled addons: 
	I0111 08:47:08.675815  705342 out.go:179] * Verifying Kubernetes components...
	I0111 08:47:06.514484  711632 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-102854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:47:06.531299  711632 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 08:47:06.535265  711632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:47:06.545120  711632 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-102854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kubernetes-upgrade-102854 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:47:06.545231  711632 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:47:06.545285  711632 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:47:06.579703  711632 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I0111 08:47:06.579772  711632 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
	I0111 08:47:06.608312  711632 crio.go:450] Found 9 existing images, backing up...
	I0111 08:47:06.608398  711632 ssh_runner.go:195] Run: mktemp -d
	I0111 08:47:06.613776  711632 crio.go:290] Saving image docker.io/kindest/kindnetd:v20230511-dc714da8: /tmp/tmp.6IdUP1cGGz/b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79.tar
	I0111 08:47:06.613851  711632 ssh_runner.go:195] Run: sudo podman save docker.io/kindest/kindnetd:v20230511-dc714da8 -o /tmp/tmp.6IdUP1cGGz/b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79.tar
	I0111 08:47:07.027888  711632 crio.go:290] Saving image gcr.io/k8s-minikube/storage-provisioner:v5: /tmp/tmp.6IdUP1cGGz/ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6.tar
	I0111 08:47:07.027971  711632 ssh_runner.go:195] Run: sudo podman save gcr.io/k8s-minikube/storage-provisioner:v5 -o /tmp/tmp.6IdUP1cGGz/ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6.tar
	I0111 08:47:07.219171  711632 crio.go:290] Saving image registry.k8s.io/coredns/coredns:v1.10.1: /tmp/tmp.6IdUP1cGGz/97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108.tar
	I0111 08:47:07.219261  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/coredns/coredns:v1.10.1 -o /tmp/tmp.6IdUP1cGGz/97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108.tar
	I0111 08:47:07.525467  711632 crio.go:290] Saving image registry.k8s.io/etcd:3.5.9-0: /tmp/tmp.6IdUP1cGGz/9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace.tar
	I0111 08:47:07.525544  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/etcd:3.5.9-0 -o /tmp/tmp.6IdUP1cGGz/9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace.tar
	I0111 08:47:08.534454  711632 ssh_runner.go:235] Completed: sudo podman save registry.k8s.io/etcd:3.5.9-0 -o /tmp/tmp.6IdUP1cGGz/9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace.tar: (1.008889787s)
	I0111 08:47:08.534482  711632 crio.go:290] Saving image registry.k8s.io/kube-apiserver:v1.28.0: /tmp/tmp.6IdUP1cGGz/00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766.tar
	I0111 08:47:08.534533  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/kube-apiserver:v1.28.0 -o /tmp/tmp.6IdUP1cGGz/00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766.tar
	I0111 08:47:09.407816  711632 crio.go:290] Saving image registry.k8s.io/kube-controller-manager:v1.28.0: /tmp/tmp.6IdUP1cGGz/46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5.tar
	I0111 08:47:09.407879  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/kube-controller-manager:v1.28.0 -o /tmp/tmp.6IdUP1cGGz/46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5.tar
	I0111 08:47:08.678953  705342 addons.go:530] duration metric: took 8.81957ms for enable addons: enabled=[]
	I0111 08:47:08.679055  705342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:47:09.030336  705342 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:47:09.082685  705342 node_ready.go:35] waiting up to 6m0s for node "pause-042270" to be "Ready" ...
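	The 6-minute node readiness wait that starts here has a direct kubectl equivalent (kubeconfig path taken from the "Updating kubeconfig" line above):

    kubectl --kubeconfig /home/jenkins/minikube-integration/22402-575040/kubeconfig \
      wait node/pause-042270 --for=condition=Ready --timeout=6m0s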
	I0111 08:47:10.364183  711632 crio.go:290] Saving image registry.k8s.io/kube-proxy:v1.28.0: /tmp/tmp.6IdUP1cGGz/940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d.tar
	I0111 08:47:10.364273  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/kube-proxy:v1.28.0 -o /tmp/tmp.6IdUP1cGGz/940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d.tar
	I0111 08:47:10.947414  711632 crio.go:290] Saving image registry.k8s.io/kube-scheduler:v1.28.0: /tmp/tmp.6IdUP1cGGz/762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee.tar
	I0111 08:47:10.947495  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/kube-scheduler:v1.28.0 -o /tmp/tmp.6IdUP1cGGz/762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee.tar
	I0111 08:47:11.549336  711632 crio.go:290] Saving image registry.k8s.io/pause:3.9: /tmp/tmp.6IdUP1cGGz/829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e.tar
	I0111 08:47:11.549404  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/pause:3.9 -o /tmp/tmp.6IdUP1cGGz/829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e.tar
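	The save commands in this block, paired with the podman load calls later in the log, are a plain back-up-and-restore of the images that the preload tarball would otherwise clobber. The round trip for one image, in isolation:

    # stash an existing image, overwrite /var with the preload, then bring the image back
    dir=$(mktemp -d)
    sudo podman save registry.k8s.io/pause:3.9 -o "$dir/pause.tar"
    # ... preloaded-images tarball extracted over /var here ...
    sudo podman load -i "$dir/pause.tar"
    rm -rf "$dir"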
	I0111 08:47:11.632309  711632 ssh_runner.go:195] Run: which lz4
	I0111 08:47:11.635980  711632 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0111 08:47:11.640063  711632 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0111 08:47:11.640097  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306152852 bytes)
	I0111 08:47:13.635070  711632 crio.go:496] duration metric: took 1.999121615s to copy over tarball
	I0111 08:47:13.635192  711632 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0111 08:47:14.959750  705342 node_ready.go:49] node "pause-042270" is "Ready"
	I0111 08:47:14.959781  705342 node_ready.go:38] duration metric: took 5.877057511s for node "pause-042270" to be "Ready" ...
	I0111 08:47:14.959796  705342 api_server.go:52] waiting for apiserver process to appear ...
	I0111 08:47:14.959861  705342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:47:15.041141  705342 api_server.go:72] duration metric: took 6.371362572s to wait for apiserver process to appear ...
	I0111 08:47:15.041169  705342 api_server.go:88] waiting for apiserver healthz status ...
	I0111 08:47:15.041193  705342 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 08:47:15.177698  705342 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0111 08:47:15.177730  705342 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
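	The 403 above is expected on the first poll: the request is unauthenticated, and the anonymous user cannot read /healthz until the RBAC bootstrap roles (the [-]poststarthook/rbac/bootstrap-roles check seen below) have been installed. Presenting the profile's client certificate, using the paths from the client config logged above, returns the real status:

    # same health probe, authenticated with the profile's client cert
    curl --cacert /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt \
         --cert   /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/client.crt \
         --key    /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/client.key \
         "https://192.168.76.2:8443/healthz?verbose"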
	I0111 08:47:15.542073  705342 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 08:47:15.562336  705342 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 08:47:15.562365  705342 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 08:47:16.041515  705342 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 08:47:16.054622  705342 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 08:47:16.054669  705342 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 08:47:16.541247  705342 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 08:47:16.550796  705342 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0111 08:47:16.552012  705342 api_server.go:141] control plane version: v1.35.0
	I0111 08:47:16.552044  705342 api_server.go:131] duration metric: took 1.510863571s to wait for apiserver health ...
	I0111 08:47:16.552085  705342 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 08:47:16.557783  705342 system_pods.go:59] 7 kube-system pods found
	I0111 08:47:16.557818  705342 system_pods.go:61] "coredns-7d764666f9-rvvbr" [b97d5e73-1b07-4f9e-afdb-f28f370a600e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 08:47:16.557827  705342 system_pods.go:61] "etcd-pause-042270" [f7798498-721b-4c9b-aec7-658a3bb8a17e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 08:47:16.557835  705342 system_pods.go:61] "kindnet-45gwk" [7a16ed15-2c49-4c4a-90a5-bc8d0439b6b0] Running
	I0111 08:47:16.557842  705342 system_pods.go:61] "kube-apiserver-pause-042270" [31eed741-7615-49d7-939e-cd2bd5220ea3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 08:47:16.557855  705342 system_pods.go:61] "kube-controller-manager-pause-042270" [9ad8453d-aab9-462a-8b5b-3a4da7e5f958] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 08:47:16.557865  705342 system_pods.go:61] "kube-proxy-bdk4s" [e4b86581-45ce-4c68-b7d0-c1a7f3ef088f] Running
	I0111 08:47:16.557872  705342 system_pods.go:61] "kube-scheduler-pause-042270" [338504f7-7c37-42b1-a7bd-d1bd5f08794c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 08:47:16.557880  705342 system_pods.go:74] duration metric: took 5.789201ms to wait for pod list to return data ...
	I0111 08:47:16.557892  705342 default_sa.go:34] waiting for default service account to be created ...
	I0111 08:47:16.561089  705342 default_sa.go:45] found service account: "default"
	I0111 08:47:16.561117  705342 default_sa.go:55] duration metric: took 3.218326ms for default service account to be created ...
	I0111 08:47:16.561130  705342 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 08:47:16.564342  705342 system_pods.go:86] 7 kube-system pods found
	I0111 08:47:16.564381  705342 system_pods.go:89] "coredns-7d764666f9-rvvbr" [b97d5e73-1b07-4f9e-afdb-f28f370a600e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 08:47:16.564391  705342 system_pods.go:89] "etcd-pause-042270" [f7798498-721b-4c9b-aec7-658a3bb8a17e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 08:47:16.564397  705342 system_pods.go:89] "kindnet-45gwk" [7a16ed15-2c49-4c4a-90a5-bc8d0439b6b0] Running
	I0111 08:47:16.564405  705342 system_pods.go:89] "kube-apiserver-pause-042270" [31eed741-7615-49d7-939e-cd2bd5220ea3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 08:47:16.564412  705342 system_pods.go:89] "kube-controller-manager-pause-042270" [9ad8453d-aab9-462a-8b5b-3a4da7e5f958] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 08:47:16.564418  705342 system_pods.go:89] "kube-proxy-bdk4s" [e4b86581-45ce-4c68-b7d0-c1a7f3ef088f] Running
	I0111 08:47:16.564430  705342 system_pods.go:89] "kube-scheduler-pause-042270" [338504f7-7c37-42b1-a7bd-d1bd5f08794c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 08:47:16.564439  705342 system_pods.go:126] duration metric: took 3.302988ms to wait for k8s-apps to be running ...
	I0111 08:47:16.564452  705342 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 08:47:16.564508  705342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:47:16.580135  705342 system_svc.go:56] duration metric: took 15.674189ms WaitForService to wait for kubelet
	I0111 08:47:16.580168  705342 kubeadm.go:587] duration metric: took 7.910397238s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 08:47:16.580187  705342 node_conditions.go:102] verifying NodePressure condition ...
	I0111 08:47:16.583398  705342 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 08:47:16.583439  705342 node_conditions.go:123] node cpu capacity is 2
	I0111 08:47:16.583453  705342 node_conditions.go:105] duration metric: took 3.260854ms to run NodePressure ...
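	The NodePressure figures above (203034800Ki of ephemeral storage, 2 CPUs) come straight from the node's status; the same values can be read back with a jsonpath query:

    kubectl --kubeconfig /home/jenkins/minikube-integration/22402-575040/kubeconfig \
      get node pause-042270 -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'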
	I0111 08:47:16.583467  705342 start.go:242] waiting for startup goroutines ...
	I0111 08:47:16.583474  705342 start.go:247] waiting for cluster config update ...
	I0111 08:47:16.583483  705342 start.go:256] writing updated cluster config ...
	I0111 08:47:16.583797  705342 ssh_runner.go:195] Run: rm -f paused
	I0111 08:47:16.587768  705342 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 08:47:16.588381  705342 kapi.go:59] client config for pause-042270: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/client.crt", KeyFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/client.key", CAFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f7bf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0111 08:47:16.592013  705342 pod_ready.go:83] waiting for pod "coredns-7d764666f9-rvvbr" in "kube-system" namespace to be "Ready" or be gone ...
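	The per-pod readiness waits that start here mirror what kubectl wait does (minikube additionally tolerates the pod disappearing, hence "or be gone"); for the coredns pod the equivalent one-liner, with the label taken from the selector list at 08:47:16.587768, is:

    kubectl --kubeconfig /home/jenkins/minikube-integration/22402-575040/kubeconfig \
      -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s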
	I0111 08:47:16.518804  711632 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.883561924s)
	I0111 08:47:16.518831  711632 crio.go:503] duration metric: took 2.883693759s to extract the tarball
	I0111 08:47:16.518839  711632 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0111 08:47:16.557050  711632 crio.go:511] Restoring backed up images...
	I0111 08:47:16.557071  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79.tar
	I0111 08:47:16.557140  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79.tar
	I0111 08:47:17.497976  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6.tar
	I0111 08:47:17.498043  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6.tar
	I0111 08:47:17.625977  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108.tar
	I0111 08:47:17.626046  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108.tar
	I0111 08:47:18.382088  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace.tar
	I0111 08:47:18.382184  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace.tar
	W0111 08:47:18.602703  705342 pod_ready.go:104] pod "coredns-7d764666f9-rvvbr" is not "Ready", error: <nil>
	W0111 08:47:21.098729  705342 pod_ready.go:104] pod "coredns-7d764666f9-rvvbr" is not "Ready", error: <nil>
	I0111 08:47:21.597371  705342 pod_ready.go:94] pod "coredns-7d764666f9-rvvbr" is "Ready"
	I0111 08:47:21.597406  705342 pod_ready.go:86] duration metric: took 5.005325172s for pod "coredns-7d764666f9-rvvbr" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:21.600355  705342 pod_ready.go:83] waiting for pod "etcd-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:20.468944  711632 ssh_runner.go:235] Completed: sudo podman load -i /tmp/tmp.6IdUP1cGGz/9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace.tar: (2.086731957s)
	I0111 08:47:20.468985  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766.tar
	I0111 08:47:20.469037  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766.tar
	I0111 08:47:21.361876  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5.tar
	I0111 08:47:21.361982  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5.tar
	I0111 08:47:22.275598  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d.tar
	I0111 08:47:22.275669  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d.tar
	I0111 08:47:23.086738  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee.tar
	I0111 08:47:23.086808  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee.tar
	I0111 08:47:23.588201  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e.tar
	I0111 08:47:23.588267  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e.tar
	I0111 08:47:23.741781  711632 ssh_runner.go:195] Run: rm -rf /tmp/tmp.6IdUP1cGGz
	I0111 08:47:23.826250  711632 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:47:23.876375  711632 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:47:23.876400  711632 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:47:23.876409  711632 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0111 08:47:23.876507  711632 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-102854 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:kubernetes-upgrade-102854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:47:23.876593  711632 ssh_runner.go:195] Run: crio config
	I0111 08:47:23.939460  711632 cni.go:84] Creating CNI manager for ""
	I0111 08:47:23.939490  711632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:47:23.939513  711632 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:47:23.939536  711632 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-102854 NodeName:kubernetes-upgrade-102854 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:47:23.939689  711632 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-102854"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 08:47:23.939770  711632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:47:23.948456  711632 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:47:23.948536  711632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:47:23.955741  711632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0111 08:47:23.969014  711632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:47:23.982040  711632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2242 bytes)
	I0111 08:47:23.994919  711632 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:47:23.998575  711632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:47:24.012228  711632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:47:24.153648  711632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:47:24.170506  711632 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854 for IP: 192.168.85.2
	I0111 08:47:24.170582  711632 certs.go:195] generating shared ca certs ...
	I0111 08:47:24.170614  711632 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:47:24.170797  711632 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 08:47:24.170891  711632 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 08:47:24.170917  711632 certs.go:257] generating profile certs ...
	I0111 08:47:24.171045  711632 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/client.key
	I0111 08:47:24.171165  711632 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/apiserver.key.cdcbcf04
	I0111 08:47:24.171230  711632 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/proxy-client.key
	I0111 08:47:24.171381  711632 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 08:47:24.171447  711632 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 08:47:24.171471  711632 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 08:47:24.171535  711632 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 08:47:24.171591  711632 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:47:24.171646  711632 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 08:47:24.171742  711632 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 08:47:24.172451  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:47:24.198471  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 08:47:24.225434  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:47:24.247943  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:47:24.267927  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0111 08:47:24.295185  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:47:24.321273  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:47:24.343995  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 08:47:24.361846  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:47:24.382251  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 08:47:24.401829  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 08:47:24.421712  711632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:47:24.435085  711632 ssh_runner.go:195] Run: openssl version
	I0111 08:47:24.443680  711632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 08:47:24.451933  711632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 08:47:24.460524  711632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 08:47:24.464497  711632 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 08:47:24.464613  711632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 08:47:24.505313  711632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:47:24.513215  711632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:24.520981  711632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:47:24.528686  711632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:24.532758  711632 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:24.532896  711632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:24.573907  711632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:47:24.581633  711632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 08:47:24.589715  711632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 08:47:24.597460  711632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 08:47:24.601741  711632 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 08:47:24.601835  711632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 08:47:24.645605  711632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:47:24.653378  711632 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:47:24.657178  711632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 08:47:24.698887  711632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 08:47:24.740441  711632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 08:47:24.782344  711632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 08:47:24.823946  711632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 08:47:24.866688  711632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	W0111 08:47:23.607308  705342 pod_ready.go:104] pod "etcd-pause-042270" is not "Ready", error: <nil>
	I0111 08:47:25.605660  705342 pod_ready.go:94] pod "etcd-pause-042270" is "Ready"
	I0111 08:47:25.605685  705342 pod_ready.go:86] duration metric: took 4.005297212s for pod "etcd-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:25.608750  705342 pod_ready.go:83] waiting for pod "kube-apiserver-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.115367  705342 pod_ready.go:94] pod "kube-apiserver-pause-042270" is "Ready"
	I0111 08:47:27.115448  705342 pod_ready.go:86] duration metric: took 1.506678286s for pod "kube-apiserver-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.117902  705342 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.123176  705342 pod_ready.go:94] pod "kube-controller-manager-pause-042270" is "Ready"
	I0111 08:47:27.123256  705342 pod_ready.go:86] duration metric: took 5.327442ms for pod "kube-controller-manager-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.125531  705342 pod_ready.go:83] waiting for pod "kube-proxy-bdk4s" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.130562  705342 pod_ready.go:94] pod "kube-proxy-bdk4s" is "Ready"
	I0111 08:47:27.130643  705342 pod_ready.go:86] duration metric: took 5.08895ms for pod "kube-proxy-bdk4s" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.203628  705342 pod_ready.go:83] waiting for pod "kube-scheduler-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.604304  705342 pod_ready.go:94] pod "kube-scheduler-pause-042270" is "Ready"
	I0111 08:47:27.604378  705342 pod_ready.go:86] duration metric: took 400.722308ms for pod "kube-scheduler-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.604412  705342 pod_ready.go:40] duration metric: took 11.016566665s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 08:47:27.699553  705342 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 08:47:27.703743  705342 out.go:203] 
	W0111 08:47:27.706853  705342 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 08:47:27.709970  705342 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 08:47:27.713097  705342 out.go:179] * Done! kubectl is now configured to use "pause-042270" cluster and "default" namespace by default
	I0111 08:47:24.908676  711632 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-102854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kubernetes-upgrade-102854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:47:24.908767  711632 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:47:24.908882  711632 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:47:24.937286  711632 cri.go:96] found id: ""
	I0111 08:47:24.937362  711632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:47:24.945400  711632 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 08:47:24.945421  711632 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 08:47:24.945498  711632 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 08:47:24.953247  711632 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 08:47:24.953801  711632 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-102854" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:47:24.954053  711632 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-575040/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-102854" cluster setting kubeconfig missing "kubernetes-upgrade-102854" context setting]
	I0111 08:47:24.954560  711632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:47:24.955231  711632 kapi.go:59] client config for kubernetes-upgrade-102854: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/client.crt", KeyFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/client.key", CAFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f7bf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0111 08:47:24.955796  711632 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0111 08:47:24.955819  711632 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0111 08:47:24.955825  711632 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0111 08:47:24.955830  711632 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0111 08:47:24.955834  711632 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0111 08:47:24.955838  711632 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0111 08:47:24.956096  711632 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 08:47:24.965803  711632 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2026-01-11 08:46:40.555241336 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2026-01-11 08:47:23.987850844 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-102854"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	@@ -51,6 +54,7 @@
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	 containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	+failCgroupV1: false
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
	I0111 08:47:24.965884  711632 kubeadm.go:1161] stopping kube-system containers ...
	I0111 08:47:24.965903  711632 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0111 08:47:24.965960  711632 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:47:24.993756  711632 cri.go:96] found id: ""
	I0111 08:47:24.993826  711632 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0111 08:47:25.017185  711632 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:47:25.025869  711632 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Jan 11 08:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 11 08:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Jan 11 08:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan 11 08:46 /etc/kubernetes/scheduler.conf
	
	I0111 08:47:25.026026  711632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:47:25.035283  711632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:47:25.044435  711632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:47:25.053361  711632 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0111 08:47:25.053458  711632 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:47:25.061401  711632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:47:25.075839  711632 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0111 08:47:25.075965  711632 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:47:25.084650  711632 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:47:25.093500  711632 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 08:47:25.150060  711632 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 08:47:26.623560  711632 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.473353571s)
	I0111 08:47:26.623674  711632 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0111 08:47:26.830705  711632 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 08:47:26.895921  711632 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0111 08:47:26.973337  711632 api_server.go:52] waiting for apiserver process to appear ...
	I0111 08:47:26.973415  711632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:47:27.473621  711632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:47:27.974171  711632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:47:28.011962  711632 api_server.go:72] duration metric: took 1.038634909s to wait for apiserver process to appear ...
	I0111 08:47:28.011992  711632 api_server.go:88] waiting for apiserver healthz status ...
	I0111 08:47:28.012014  711632 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	
	
	==> CRI-O <==
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.167801736Z" level=info msg="Started container" PID=2491 containerID=b061ecb176606fab39561201f0b787b9dd4d71b0fc0623474ac1a6dc66c8e2c4 description=kube-system/kube-apiserver-pause-042270/kube-apiserver id=09163e85-071e-41ac-9ef2-805befb15cef name=/runtime.v1.RuntimeService/StartContainer sandboxID=22df88bdf853f0b25200b13cf7ac09687ecefc35f7d96a5107361a20b33f94e3
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.177380857Z" level=info msg="Created container 6757114d0bafdfdc9e1e9a1d717d07e9d57e8a08cff36664f3830bd435d07c8e: kube-system/etcd-pause-042270/etcd" id=a077f238-8442-414a-b0d3-1b4e23d64820 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.179465572Z" level=info msg="Starting container: 6757114d0bafdfdc9e1e9a1d717d07e9d57e8a08cff36664f3830bd435d07c8e" id=3a54c5b7-7d97-47fd-b7a6-40cbe8442664 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.181149173Z" level=info msg="Created container 9b9a55dfc3ce9cd0cb4e9ff91cf50836a88a208da4303f3ead3af6b677e6d084: kube-system/kindnet-45gwk/kindnet-cni" id=6e054cf0-3e9a-4fa4-9503-db4797f7c217 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.1819663Z" level=info msg="Starting container: 9b9a55dfc3ce9cd0cb4e9ff91cf50836a88a208da4303f3ead3af6b677e6d084" id=80f728cd-6842-4a98-b833-d9404fa6a275 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.183615677Z" level=info msg="Started container" PID=2499 containerID=6757114d0bafdfdc9e1e9a1d717d07e9d57e8a08cff36664f3830bd435d07c8e description=kube-system/etcd-pause-042270/etcd id=3a54c5b7-7d97-47fd-b7a6-40cbe8442664 name=/runtime.v1.RuntimeService/StartContainer sandboxID=886cb95c86b817ca65def38e7019d79e42c9891eccebb8a04ec6508e8c786373
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.189705395Z" level=info msg="Started container" PID=2485 containerID=9b9a55dfc3ce9cd0cb4e9ff91cf50836a88a208da4303f3ead3af6b677e6d084 description=kube-system/kindnet-45gwk/kindnet-cni id=80f728cd-6842-4a98-b833-d9404fa6a275 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac580480f27494fa1d617f5ea0edddc8144e5061b7e157d10a79a511cd7b9518
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.49853287Z" level=info msg="Created container 9e7781bd18991b364c5844f04276556b6c10c7136844673ea950edbde5503892: kube-system/kube-proxy-bdk4s/kube-proxy" id=61aa6279-53dd-475b-8976-ea47bb595ce8 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.499165494Z" level=info msg="Starting container: 9e7781bd18991b364c5844f04276556b6c10c7136844673ea950edbde5503892" id=81ccde5b-eae8-4431-818b-54abd03ca348 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.505198194Z" level=info msg="Started container" PID=2497 containerID=9e7781bd18991b364c5844f04276556b6c10c7136844673ea950edbde5503892 description=kube-system/kube-proxy-bdk4s/kube-proxy id=81ccde5b-eae8-4431-818b-54abd03ca348 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a46b81baeb92b0536258e746911a4db7395d911bde452eaa5049f27219bd363c
	Jan 11 08:47:11 pause-042270 crio[2168]: time="2026-01-11T08:47:11.452081402Z" level=info msg="Removing container: e4f927ff762a02e598b216fe9c75e5e7250c2463356b93b401329e89a8fd483d" id=dc254c97-a1f3-4ab1-b055-97ca496fa0ed name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 08:47:11 pause-042270 crio[2168]: time="2026-01-11T08:47:11.483846696Z" level=info msg="Removed container e4f927ff762a02e598b216fe9c75e5e7250c2463356b93b401329e89a8fd483d: kube-system/kube-scheduler-pause-042270/kube-scheduler" id=dc254c97-a1f3-4ab1-b055-97ca496fa0ed name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 08:47:11 pause-042270 crio[2168]: time="2026-01-11T08:47:11.485384531Z" level=info msg="Removing container: 7df702dfc7823702c012152170a5970a42c963801b47788b5c53f0a68a4b5b0a" id=8c947452-6883-450e-bc51-caff5d0d664a name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 08:47:11 pause-042270 crio[2168]: time="2026-01-11T08:47:11.512780894Z" level=info msg="Removed container 7df702dfc7823702c012152170a5970a42c963801b47788b5c53f0a68a4b5b0a: kube-system/kube-controller-manager-pause-042270/kube-controller-manager" id=8c947452-6883-450e-bc51-caff5d0d664a name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.572073978Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.572112033Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.578226901Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.578441484Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.585292485Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.585479835Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.595569108Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.59603922Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.596160501Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.603524978Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.603557306Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b061ecb176606       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     22 seconds ago       Running             kube-apiserver            1                   22df88bdf853f       kube-apiserver-pause-042270            kube-system
	6757114d0bafd       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     22 seconds ago       Running             etcd                      1                   886cb95c86b81       etcd-pause-042270                      kube-system
	9e7781bd18991       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     22 seconds ago       Running             kube-proxy                2                   a46b81baeb92b       kube-proxy-bdk4s                       kube-system
	9b9a55dfc3ce9       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     22 seconds ago       Running             kindnet-cni               2                   ac580480f2749       kindnet-45gwk                          kube-system
	d8e4dc716e9fb       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     22 seconds ago       Running             kube-controller-manager   2                   3bc3005987e88       kube-controller-manager-pause-042270   kube-system
	4f37daff12209       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     22 seconds ago       Running             kube-scheduler            2                   f559cbfea7e70       kube-scheduler-pause-042270            kube-system
	3386692eec9fe       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     22 seconds ago       Running             coredns                   2                   d479694e30c27       coredns-7d764666f9-rvvbr               kube-system
	608d40b7c34b0       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     About a minute ago   Created             coredns                   1                   d479694e30c27       coredns-7d764666f9-rvvbr               kube-system
	2ef6b516b54d3       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     About a minute ago   Created             kube-proxy                1                   a46b81baeb92b       kube-proxy-bdk4s                       kube-system
	0b7fcbbd82786       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     About a minute ago   Created             kindnet-cni               1                   ac580480f2749       kindnet-45gwk                          kube-system
	9bc6322fbfe5b       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     About a minute ago   Exited              kube-scheduler            1                   f559cbfea7e70       kube-scheduler-pause-042270            kube-system
	a8599322e647e       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     About a minute ago   Exited              kube-controller-manager   1                   3bc3005987e88       kube-controller-manager-pause-042270   kube-system
	9bc5caca97247       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     2 minutes ago        Exited              coredns                   0                   d479694e30c27       coredns-7d764666f9-rvvbr               kube-system
	4091f664f637a       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   2 minutes ago        Exited              kindnet-cni               0                   ac580480f2749       kindnet-45gwk                          kube-system
	c657c976d677e       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     2 minutes ago        Exited              kube-proxy                0                   a46b81baeb92b       kube-proxy-bdk4s                       kube-system
	043f80b890120       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     2 minutes ago        Exited              kube-apiserver            0                   22df88bdf853f       kube-apiserver-pause-042270            kube-system
	5bd088708d4de       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     2 minutes ago        Exited              etcd                      0                   886cb95c86b81       etcd-pause-042270                      kube-system
	
	
	==> coredns [3386692eec9fee759f4c5f30957286e96e3ffe1d2f0d8a8509abfb8f37a2466f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59269 - 65235 "HINFO IN 4471625385466506611.361694457902993617. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.035534569s
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> coredns [608d40b7c34b0aa005a4dc964b0820a313cca729a0940d77d4611c6f8f338715] <==
	
	
	==> coredns [9bc5caca9724764aa07f8310a52ec008336dd061840833714e8054f0bc2d4592] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37970 - 39153 "HINFO IN 5538964161996014554.1094676101687428058. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015967769s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-042270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-042270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=pause-042270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T08_45_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 08:45:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-042270
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 08:47:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 08:45:22 +0000   Sun, 11 Jan 2026 08:44:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 08:45:22 +0000   Sun, 11 Jan 2026 08:44:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 08:45:22 +0000   Sun, 11 Jan 2026 08:44:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 08:45:22 +0000   Sun, 11 Jan 2026 08:45:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-042270
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                19c1111f-9168-4b54-8986-5e231c915609
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-rvvbr                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-pause-042270                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-45gwk                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-pause-042270             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-pause-042270    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-bdk4s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-pause-042270             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  2m23s  node-controller  Node pause-042270 event: Registered Node pause-042270 in Controller
	  Normal  RegisteredNode  13s    node-controller  Node pause-042270 event: Registered Node pause-042270 in Controller
	
	
	==> dmesg <==
	[Jan11 08:26] overlayfs: idmapped layers are currently not supported
	[Jan11 08:27] overlayfs: idmapped layers are currently not supported
	[  +2.584198] overlayfs: idmapped layers are currently not supported
	[Jan11 08:28] overlayfs: idmapped layers are currently not supported
	[ +33.770996] overlayfs: idmapped layers are currently not supported
	[Jan11 08:29] overlayfs: idmapped layers are currently not supported
	[  +3.600210] overlayfs: idmapped layers are currently not supported
	[Jan11 08:30] overlayfs: idmapped layers are currently not supported
	[Jan11 08:31] overlayfs: idmapped layers are currently not supported
	[Jan11 08:32] overlayfs: idmapped layers are currently not supported
	[Jan11 08:35] overlayfs: idmapped layers are currently not supported
	[Jan11 08:36] overlayfs: idmapped layers are currently not supported
	[Jan11 08:37] overlayfs: idmapped layers are currently not supported
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5bd088708d4deb3e026194575f822c0860ec80b9327e4f9e76e6e0fa14fbe2f1] <==
	{"level":"info","ts":"2026-01-11T08:44:57.760716Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T08:44:57.770250Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T08:44:57.770347Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T08:44:57.770612Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T08:44:57.770730Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-11T08:44:57.771385Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T08:44:57.798400Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-11T08:45:29.181152Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2026-01-11T08:45:29.181210Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-042270","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2026-01-11T08:45:29.181343Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2026-01-11T08:45:29.357304Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2026-01-11T08:45:29.358860Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2026-01-11T08:45:29.358923Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2026-01-11T08:45:29.358943Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2026-01-11T08:45:29.359014Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2026-01-11T08:45:29.359077Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2026-01-11T08:45:29.359121Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-11T08:45:29.359175Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2026-01-11T08:45:29.359235Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2026-01-11T08:45:29.359294Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2026-01-11T08:45:29.358759Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-11T08:45:29.362575Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2026-01-11T08:45:29.362726Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-11T08:45:29.362799Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T08:45:29.362844Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-042270","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [6757114d0bafdfdc9e1e9a1d717d07e9d57e8a08cff36664f3830bd435d07c8e] <==
	{"level":"info","ts":"2026-01-11T08:47:09.512713Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T08:47:09.512840Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T08:47:09.514202Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-11T08:47:09.538969Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T08:47:09.550324Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T08:47:09.558789Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T08:47:09.558845Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T08:47:09.826315Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T08:47:09.826408Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T08:47:09.826474Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T08:47:09.826488Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T08:47:09.826504Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T08:47:09.830193Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-11T08:47:09.830255Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T08:47:09.830280Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-11T08:47:09.830289Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-11T08:47:09.834398Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-042270 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T08:47:09.834438Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T08:47:09.834472Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T08:47:09.842395Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T08:47:09.847673Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T08:47:09.847769Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T08:47:09.848573Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T08:47:09.856527Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T08:47:09.938940Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 08:47:31 up  3:30,  0 user,  load average: 3.22, 2.66, 2.60
	Linux pause-042270 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b7fcbbd82786fed4387b48391bd12d068f8935bd11d2de482be853f78820f5f] <==
	
	
	==> kindnet [4091f664f637aede6861e233180d95399d5deaeeced980fb3d4654d7fd3396f3] <==
	I0111 08:45:12.540804       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 08:45:12.541056       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0111 08:45:12.541189       1 main.go:148] setting mtu 1500 for CNI 
	I0111 08:45:12.541208       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 08:45:12.541218       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T08:45:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 08:45:12.742969       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 08:45:12.743054       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 08:45:12.743091       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 08:45:12.744253       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 08:45:12.944294       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 08:45:12.944390       1 metrics.go:72] Registering metrics
	I0111 08:45:12.944474       1 controller.go:711] "Syncing nftables rules"
	I0111 08:45:22.743025       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 08:45:22.743675       1 main.go:301] handling current node
	
	
	==> kindnet [9b9a55dfc3ce9cd0cb4e9ff91cf50836a88a208da4303f3ead3af6b677e6d084] <==
	I0111 08:47:09.359982       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 08:47:09.360391       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0111 08:47:09.360581       1 main.go:148] setting mtu 1500 for CNI 
	I0111 08:47:09.360629       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 08:47:09.360664       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T08:47:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 08:47:09.563767       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 08:47:09.563865       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 08:47:09.563903       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 08:47:09.564784       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 08:47:15.365991       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 08:47:15.366084       1 metrics.go:72] Registering metrics
	I0111 08:47:15.366277       1 controller.go:711] "Syncing nftables rules"
	I0111 08:47:19.563330       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 08:47:19.563456       1 main.go:301] handling current node
	I0111 08:47:29.563864       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 08:47:29.563934       1 main.go:301] handling current node
	
	
	==> kube-apiserver [043f80b8901207f1b00f2e5d8307335f58820fe7d929fc27e1c0b07106271e96] <==
	W0111 08:45:29.219218       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219296       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219345       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219391       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219467       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219516       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219578       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219630       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219681       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219730       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219782       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219833       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219890       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219938       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219988       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220038       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220096       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220143       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220197       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220257       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220304       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220353       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220401       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.221702       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b061ecb176606fab39561201f0b787b9dd4d71b0fc0623474ac1a6dc66c8e2c4] <==
	I0111 08:47:15.163757       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:15.163801       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 08:47:15.182748       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 08:47:15.196889       1 aggregator.go:187] initial CRD sync complete...
	I0111 08:47:15.216014       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 08:47:15.216106       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 08:47:15.216138       1 cache.go:39] Caches are synced for autoregister controller
	I0111 08:47:15.210644       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:15.222789       1 policy_source.go:248] refreshing policies
	I0111 08:47:15.206543       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 08:47:15.224703       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 08:47:15.230198       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 08:47:15.231042       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0111 08:47:15.231212       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0111 08:47:15.239889       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 08:47:15.244804       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 08:47:15.273018       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 08:47:15.293181       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E0111 08:47:15.344370       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 08:47:15.656422       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 08:47:17.121025       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 08:47:18.420497       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 08:47:18.575983       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 08:47:18.615494       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 08:47:18.714646       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [a8599322e647ea17e3e1d9753183eca263c26a46964d0822f85fab8f2399a7fa] <==
	
	
	==> kube-controller-manager [d8e4dc716e9fbad51f33509cd8d8d0eb48040e799342b510f6b5274aab249c86] <==
	I0111 08:47:18.257691       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.257734       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.257922       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.259731       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.259839       1 range_allocator.go:177] "Sending events to api server"
	I0111 08:47:18.259873       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0111 08:47:18.259877       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:47:18.259882       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.259967       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.270610       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.271594       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.271641       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.271657       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.271696       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.274060       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.274139       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.275907       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.276001       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.278790       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:47:18.295989       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.326574       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.357853       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.357889       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 08:47:18.357898       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 08:47:18.393241       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [2ef6b516b54d3ef537d1455d60abd47dafe430aefdb427a778bb5733ef2f39a4] <==
	
	
	==> kube-proxy [9e7781bd18991b364c5844f04276556b6c10c7136844673ea950edbde5503892] <==
	I0111 08:47:11.313427       1 server_linux.go:53] "Using iptables proxy"
	I0111 08:47:11.768914       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:47:15.375411       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:15.376266       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0111 08:47:15.378420       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 08:47:15.459996       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 08:47:15.460098       1 server_linux.go:136] "Using iptables Proxier"
	I0111 08:47:15.466238       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 08:47:15.526031       1 server.go:529] "Version info" version="v1.35.0"
	I0111 08:47:15.526065       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 08:47:15.555176       1 config.go:200] "Starting service config controller"
	I0111 08:47:15.555270       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 08:47:15.558249       1 config.go:106] "Starting endpoint slice config controller"
	I0111 08:47:15.558330       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 08:47:15.564405       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 08:47:15.564491       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 08:47:15.566908       1 config.go:309] "Starting node config controller"
	I0111 08:47:15.568319       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 08:47:15.568401       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 08:47:15.657701       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 08:47:15.659322       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 08:47:15.664918       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c657c976d677e9e5a67a63345b859b3473205cff8345aad91fcee3a3485251a6] <==
	I0111 08:45:10.409416       1 server_linux.go:53] "Using iptables proxy"
	I0111 08:45:10.504599       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:45:10.605037       1 shared_informer.go:377] "Caches are synced"
	I0111 08:45:10.605068       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0111 08:45:10.605161       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 08:45:10.630121       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 08:45:10.630199       1 server_linux.go:136] "Using iptables Proxier"
	I0111 08:45:10.634716       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 08:45:10.635201       1 server.go:529] "Version info" version="v1.35.0"
	I0111 08:45:10.635217       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 08:45:10.642517       1 config.go:200] "Starting service config controller"
	I0111 08:45:10.642536       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 08:45:10.642554       1 config.go:106] "Starting endpoint slice config controller"
	I0111 08:45:10.642558       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 08:45:10.642571       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 08:45:10.642574       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 08:45:10.643236       1 config.go:309] "Starting node config controller"
	I0111 08:45:10.643245       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 08:45:10.643253       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 08:45:10.743402       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 08:45:10.743435       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 08:45:10.743467       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4f37daff12209c7cbe5088130ea4aea7c5917b3aef9b3d2100f02d6698061862] <==
	I0111 08:47:11.303006       1 serving.go:386] Generated self-signed cert in-memory
	W0111 08:47:14.914401       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 08:47:14.914514       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 08:47:14.914549       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 08:47:14.914591       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 08:47:15.156500       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 08:47:15.164524       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 08:47:15.171409       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 08:47:15.171500       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 08:47:15.189825       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:47:15.171522       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 08:47:15.304738       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [9bc6322fbfe5b3aeb3cc28d0de46bd73d50006f6f24385238e2d536bfb5ca556] <==
	
	
	==> kubelet <==
	Jan 11 08:47:11 pause-042270 kubelet[1306]: E0111 08:47:11.438828    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-042270" containerName="kube-controller-manager"
	Jan 11 08:47:11 pause-042270 kubelet[1306]: I0111 08:47:11.439488    1306 scope.go:122] "RemoveContainer" containerID="e4f927ff762a02e598b216fe9c75e5e7250c2463356b93b401329e89a8fd483d"
	Jan 11 08:47:11 pause-042270 kubelet[1306]: I0111 08:47:11.484262    1306 scope.go:122] "RemoveContainer" containerID="7df702dfc7823702c012152170a5970a42c963801b47788b5c53f0a68a4b5b0a"
	Jan 11 08:47:11 pause-042270 kubelet[1306]: E0111 08:47:11.953449    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-042270" containerName="kube-scheduler"
	Jan 11 08:47:12 pause-042270 kubelet[1306]: E0111 08:47:12.960486    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-042270" containerName="kube-scheduler"
	Jan 11 08:47:14 pause-042270 kubelet[1306]: E0111 08:47:14.857042    1306 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-042270\" is forbidden: User \"system:node:pause-042270\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-042270' and this object" podUID="d7383fa44257ed5f93002c69daf59f20" pod="kube-system/kube-apiserver-pause-042270"
	Jan 11 08:47:14 pause-042270 kubelet[1306]: E0111 08:47:14.959726    1306 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-042270\" is forbidden: User \"system:node:pause-042270\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-042270' and this object" podUID="f4f884d12ab36489436115387489b6b5" pod="kube-system/kube-controller-manager-pause-042270"
	Jan 11 08:47:15 pause-042270 kubelet[1306]: E0111 08:47:15.057633    1306 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-bdk4s\" is forbidden: User \"system:node:pause-042270\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-042270' and this object" podUID="e4b86581-45ce-4c68-b7d0-c1a7f3ef088f" pod="kube-system/kube-proxy-bdk4s"
	Jan 11 08:47:15 pause-042270 kubelet[1306]: E0111 08:47:15.161819    1306 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-45gwk\" is forbidden: User \"system:node:pause-042270\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-042270' and this object" podUID="7a16ed15-2c49-4c4a-90a5-bc8d0439b6b0" pod="kube-system/kindnet-45gwk"
	Jan 11 08:47:15 pause-042270 kubelet[1306]: E0111 08:47:15.188733    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-042270" containerName="etcd"
	Jan 11 08:47:15 pause-042270 kubelet[1306]: E0111 08:47:15.214677    1306 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-rvvbr\" is forbidden: User \"system:node:pause-042270\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-042270' and this object" podUID="b97d5e73-1b07-4f9e-afdb-f28f370a600e" pod="kube-system/coredns-7d764666f9-rvvbr"
	Jan 11 08:47:16 pause-042270 kubelet[1306]: E0111 08:47:16.876306    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-042270" containerName="kube-apiserver"
	Jan 11 08:47:16 pause-042270 kubelet[1306]: E0111 08:47:16.918040    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-042270" containerName="kube-controller-manager"
	Jan 11 08:47:21 pause-042270 kubelet[1306]: E0111 08:47:21.440668    1306 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-rvvbr" containerName="coredns"
	Jan 11 08:47:22 pause-042270 kubelet[1306]: E0111 08:47:22.061740    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-042270" containerName="kube-scheduler"
	Jan 11 08:47:22 pause-042270 kubelet[1306]: E0111 08:47:22.999656    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-042270" containerName="kube-scheduler"
	Jan 11 08:47:24 pause-042270 kubelet[1306]: W0111 08:47:24.380370    1306 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Jan 11 08:47:25 pause-042270 kubelet[1306]: E0111 08:47:25.190043    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-042270" containerName="etcd"
	Jan 11 08:47:26 pause-042270 kubelet[1306]: E0111 08:47:26.012916    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-042270" containerName="etcd"
	Jan 11 08:47:26 pause-042270 kubelet[1306]: E0111 08:47:26.908097    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-042270" containerName="kube-apiserver"
	Jan 11 08:47:26 pause-042270 kubelet[1306]: E0111 08:47:26.961453    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-042270" containerName="kube-controller-manager"
	Jan 11 08:47:27 pause-042270 kubelet[1306]: E0111 08:47:27.015806    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-042270" containerName="kube-apiserver"
	Jan 11 08:47:28 pause-042270 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 08:47:28 pause-042270 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 08:47:28 pause-042270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-042270 -n pause-042270
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-042270 -n pause-042270: exit status 2 (386.155273ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-042270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-042270
helpers_test.go:244: (dbg) docker inspect pause-042270:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4",
	        "Created": "2026-01-11T08:44:39.089747168Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 701189,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T08:44:39.26172845Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4/hostname",
	        "HostsPath": "/var/lib/docker/containers/4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4/hosts",
	        "LogPath": "/var/lib/docker/containers/4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4/4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4-json.log",
	        "Name": "/pause-042270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-042270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-042270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4561dea88724b8217c4d7c26ccf9df4cd1546ddd1ac261d29ff8c915cca31ae4",
	                "LowerDir": "/var/lib/docker/overlay2/145a0059f90af945ada96f805e1d8fcd8809c3f69b4c236e4cc6db6090ea0ff7-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/145a0059f90af945ada96f805e1d8fcd8809c3f69b4c236e4cc6db6090ea0ff7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/145a0059f90af945ada96f805e1d8fcd8809c3f69b4c236e4cc6db6090ea0ff7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/145a0059f90af945ada96f805e1d8fcd8809c3f69b4c236e4cc6db6090ea0ff7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-042270",
	                "Source": "/var/lib/docker/volumes/pause-042270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-042270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-042270",
	                "name.minikube.sigs.k8s.io": "pause-042270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f40d843f4a1f982b7ecd90ecd0abae6c6226f2c27a5013548c8e7983f087b85",
	            "SandboxKey": "/var/run/docker/netns/3f40d843f4a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33698"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33699"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33702"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33700"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33701"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-042270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:6b:b2:fa:ac:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70b55988d363dc5beae59a4f2c0270f01d6c8f47c86a4e8f237248f42184fb91",
	                    "EndpointID": "ec397850e897e14f151ae4a76b88a93261b626959ea9cff59667b64c859cce6c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-042270",
	                        "4561dea88724"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-042270 -n pause-042270
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-042270 -n pause-042270: exit status 2 (423.363932ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-042270 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-042270 logs -n 25: (2.032171792s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p multinode-869861-m03                                                                                                                  │ multinode-869861-m03        │ jenkins │ v1.37.0 │ 11 Jan 26 08:42 UTC │ 11 Jan 26 08:42 UTC │
	│ delete  │ -p multinode-869861                                                                                                                      │ multinode-869861            │ jenkins │ v1.37.0 │ 11 Jan 26 08:42 UTC │ 11 Jan 26 08:42 UTC │
	│ start   │ -p scheduled-stop-415795 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:42 UTC │ 11 Jan 26 08:43 UTC │
	│ stop    │ -p scheduled-stop-415795 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --cancel-scheduled                                                                                              │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │ 11 Jan 26 08:43 UTC │
	│ stop    │ -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │                     │
	│ stop    │ -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:43 UTC │ 11 Jan 26 08:43 UTC │
	│ delete  │ -p scheduled-stop-415795                                                                                                                 │ scheduled-stop-415795       │ jenkins │ v1.37.0 │ 11 Jan 26 08:44 UTC │ 11 Jan 26 08:44 UTC │
	│ start   │ -p insufficient-storage-616205 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-616205 │ jenkins │ v1.37.0 │ 11 Jan 26 08:44 UTC │                     │
	│ delete  │ -p insufficient-storage-616205                                                                                                           │ insufficient-storage-616205 │ jenkins │ v1.37.0 │ 11 Jan 26 08:44 UTC │ 11 Jan 26 08:44 UTC │
	│ start   │ -p pause-042270 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-042270                │ jenkins │ v1.37.0 │ 11 Jan 26 08:44 UTC │ 11 Jan 26 08:45 UTC │
	│ start   │ -p missing-upgrade-819079 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-819079      │ jenkins │ v1.35.0 │ 11 Jan 26 08:44 UTC │ 11 Jan 26 08:45 UTC │
	│ start   │ -p pause-042270 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-042270                │ jenkins │ v1.37.0 │ 11 Jan 26 08:45 UTC │ 11 Jan 26 08:47 UTC │
	│ start   │ -p missing-upgrade-819079 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-819079      │ jenkins │ v1.37.0 │ 11 Jan 26 08:45 UTC │ 11 Jan 26 08:46 UTC │
	│ delete  │ -p missing-upgrade-819079                                                                                                                │ missing-upgrade-819079      │ jenkins │ v1.37.0 │ 11 Jan 26 08:46 UTC │ 11 Jan 26 08:46 UTC │
	│ start   │ -p kubernetes-upgrade-102854 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-102854   │ jenkins │ v1.37.0 │ 11 Jan 26 08:46 UTC │ 11 Jan 26 08:46 UTC │
	│ stop    │ -p kubernetes-upgrade-102854 --alsologtostderr                                                                                           │ kubernetes-upgrade-102854   │ jenkins │ v1.37.0 │ 11 Jan 26 08:46 UTC │ 11 Jan 26 08:46 UTC │
	│ start   │ -p kubernetes-upgrade-102854 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-102854   │ jenkins │ v1.37.0 │ 11 Jan 26 08:46 UTC │                     │
	│ pause   │ -p pause-042270 --alsologtostderr -v=5                                                                                                   │ pause-042270                │ jenkins │ v1.37.0 │ 11 Jan 26 08:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 08:46:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 08:46:59.900853  711632 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:46:59.901254  711632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:46:59.901268  711632 out.go:374] Setting ErrFile to fd 2...
	I0111 08:46:59.901276  711632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:46:59.902005  711632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:46:59.902644  711632 out.go:368] Setting JSON to false
	I0111 08:46:59.903604  711632 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12570,"bootTime":1768108650,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 08:46:59.903804  711632 start.go:143] virtualization:  
	I0111 08:46:59.906779  711632 out.go:179] * [kubernetes-upgrade-102854] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:46:59.909137  711632 notify.go:221] Checking for updates...
	I0111 08:46:59.909697  711632 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:46:59.912952  711632 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:46:59.915858  711632 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:46:59.918746  711632 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 08:46:59.921537  711632 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:46:59.924458  711632 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:46:59.927856  711632 config.go:182] Loaded profile config "kubernetes-upgrade-102854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 08:46:59.928463  711632 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:46:59.955361  711632 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:46:59.955481  711632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:47:00.067955  711632 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:47:00.032988496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:47:00.068083  711632 docker.go:319] overlay module found
	I0111 08:47:00.074106  711632 out.go:179] * Using the docker driver based on existing profile
	I0111 08:47:00.077194  711632 start.go:309] selected driver: docker
	I0111 08:47:00.077217  711632 start.go:928] validating driver "docker" against &{Name:kubernetes-upgrade-102854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-102854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:47:00.077320  711632 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:47:00.078224  711632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:47:00.257598  711632 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:47:00.235416306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:47:00.258000  711632 cni.go:84] Creating CNI manager for ""
	I0111 08:47:00.258065  711632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:47:00.258108  711632 start.go:353] cluster config:
	{Name:kubernetes-upgrade-102854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kubernetes-upgrade-102854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:47:00.261629  711632 out.go:179] * Starting "kubernetes-upgrade-102854" primary control-plane node in "kubernetes-upgrade-102854" cluster
	I0111 08:47:00.266873  711632 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 08:47:00.271317  711632 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:47:00.274823  711632 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:47:00.274826  711632 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:47:00.274913  711632 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 08:47:00.274933  711632 cache.go:65] Caching tarball of preloaded images
	I0111 08:47:00.275030  711632 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 08:47:00.275039  711632 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 08:47:00.275158  711632 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/config.json ...
	I0111 08:47:00.314397  711632 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:47:00.314423  711632 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:47:00.314441  711632 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:47:00.314477  711632 start.go:360] acquireMachinesLock for kubernetes-upgrade-102854: {Name:mka28b58380642840c174fda94f450ba2ccc60e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:47:00.314553  711632 start.go:364] duration metric: took 57.42µs to acquireMachinesLock for "kubernetes-upgrade-102854"
	I0111 08:47:00.314577  711632 start.go:96] Skipping create...Using existing machine configuration
	I0111 08:47:00.314583  711632 fix.go:54] fixHost starting: 
	I0111 08:47:00.314881  711632 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-102854 --format={{.State.Status}}
	I0111 08:47:00.346431  711632 fix.go:112] recreateIfNeeded on kubernetes-upgrade-102854: state=Stopped err=<nil>
	W0111 08:47:00.346496  711632 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 08:47:00.350577  711632 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-102854" ...
	I0111 08:47:00.350736  711632 cli_runner.go:164] Run: docker start kubernetes-upgrade-102854
	I0111 08:47:00.660117  711632 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-102854 --format={{.State.Status}}
	I0111 08:47:00.684742  711632 kic.go:430] container "kubernetes-upgrade-102854" state is running.
	I0111 08:47:00.685136  711632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-102854
	I0111 08:47:00.708208  711632 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/config.json ...
	I0111 08:47:00.708659  711632 machine.go:94] provisionDockerMachine start ...
	I0111 08:47:00.708751  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:00.732808  711632 main.go:144] libmachine: Using SSH client type: native
	I0111 08:47:00.733142  711632 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33718 <nil> <nil>}
	I0111 08:47:00.733151  711632 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:47:00.734590  711632 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52560->127.0.0.1:33718: read: connection reset by peer
	I0111 08:47:03.881909  711632 main.go:144] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-102854
	
	I0111 08:47:03.881953  711632 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-102854"
	I0111 08:47:03.882045  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:03.901181  711632 main.go:144] libmachine: Using SSH client type: native
	I0111 08:47:03.901491  711632 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33718 <nil> <nil>}
	I0111 08:47:03.901503  711632 main.go:144] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-102854 && echo "kubernetes-upgrade-102854" | sudo tee /etc/hostname
	I0111 08:47:04.063383  711632 main.go:144] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-102854
	
	I0111 08:47:04.063459  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:04.081363  711632 main.go:144] libmachine: Using SSH client type: native
	I0111 08:47:04.081659  711632 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33718 <nil> <nil>}
	I0111 08:47:04.081675  711632 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-102854' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-102854/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-102854' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:47:04.230356  711632 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 08:47:04.230381  711632 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 08:47:04.230419  711632 ubuntu.go:190] setting up certificates
	I0111 08:47:04.230428  711632 provision.go:84] configureAuth start
	I0111 08:47:04.230495  711632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-102854
	I0111 08:47:04.257408  711632 provision.go:143] copyHostCerts
	I0111 08:47:04.257476  711632 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 08:47:04.257494  711632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 08:47:04.257573  711632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 08:47:04.257671  711632 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 08:47:04.257681  711632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 08:47:04.257708  711632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 08:47:04.257770  711632 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 08:47:04.257780  711632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 08:47:04.257804  711632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 08:47:04.257855  711632 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-102854 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-102854 localhost minikube]
	I0111 08:47:04.327278  711632 provision.go:177] copyRemoteCerts
	I0111 08:47:04.327349  711632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:47:04.327401  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:04.344374  711632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33718 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/kubernetes-upgrade-102854/id_rsa Username:docker}
	I0111 08:47:04.451244  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0111 08:47:04.470583  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 08:47:04.489628  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 08:47:04.507925  711632 provision.go:87] duration metric: took 277.472791ms to configureAuth
	I0111 08:47:04.508009  711632 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:47:04.508222  711632 config.go:182] Loaded profile config "kubernetes-upgrade-102854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:47:04.508340  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:04.526224  711632 main.go:144] libmachine: Using SSH client type: native
	I0111 08:47:04.526538  711632 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33718 <nil> <nil>}
	I0111 08:47:04.526560  711632 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 08:47:04.846041  711632 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 08:47:04.846063  711632 machine.go:97] duration metric: took 4.137390995s to provisionDockerMachine
	I0111 08:47:04.846075  711632 start.go:293] postStartSetup for "kubernetes-upgrade-102854" (driver="docker")
	I0111 08:47:04.846087  711632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:47:04.846177  711632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:47:04.846221  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:04.866752  711632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33718 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/kubernetes-upgrade-102854/id_rsa Username:docker}
	I0111 08:47:04.969748  711632 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:47:04.973050  711632 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:47:04.973080  711632 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:47:04.973092  711632 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 08:47:04.973149  711632 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 08:47:04.973233  711632 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 08:47:04.973341  711632 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:47:04.980875  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 08:47:04.998472  711632 start.go:296] duration metric: took 152.382002ms for postStartSetup
	I0111 08:47:04.998670  711632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:47:04.998723  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:05.018171  711632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33718 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/kubernetes-upgrade-102854/id_rsa Username:docker}
	I0111 08:47:05.119555  711632 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:47:05.124995  711632 fix.go:56] duration metric: took 4.810405116s for fixHost
	I0111 08:47:05.125023  711632 start.go:83] releasing machines lock for "kubernetes-upgrade-102854", held for 4.810460707s
	I0111 08:47:05.125101  711632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-102854
	I0111 08:47:05.142900  711632 ssh_runner.go:195] Run: cat /version.json
	I0111 08:47:05.142920  711632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:47:05.142952  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:05.142985  711632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-102854
	I0111 08:47:05.167888  711632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33718 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/kubernetes-upgrade-102854/id_rsa Username:docker}
	I0111 08:47:05.174341  711632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33718 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/kubernetes-upgrade-102854/id_rsa Username:docker}
	I0111 08:47:05.270005  711632 ssh_runner.go:195] Run: systemctl --version
	I0111 08:47:05.370735  711632 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 08:47:05.409356  711632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:47:05.413964  711632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:47:05.414040  711632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:47:05.422326  711632 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 08:47:05.422353  711632 start.go:496] detecting cgroup driver to use...
	I0111 08:47:05.422386  711632 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 08:47:05.422436  711632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 08:47:05.438193  711632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:47:05.451600  711632 docker.go:218] disabling cri-docker service (if available) ...
	I0111 08:47:05.451669  711632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 08:47:05.467317  711632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 08:47:05.480741  711632 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 08:47:05.598814  711632 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 08:47:05.714703  711632 docker.go:234] disabling docker service ...
	I0111 08:47:05.714804  711632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 08:47:05.731280  711632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 08:47:05.744962  711632 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 08:47:05.851763  711632 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 08:47:05.968857  711632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 08:47:05.981598  711632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:47:05.997806  711632 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 08:47:05.997874  711632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.013613  711632 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 08:47:06.013695  711632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.025391  711632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.036050  711632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.046108  711632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:47:06.055563  711632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.064815  711632 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.074113  711632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 08:47:06.083362  711632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:47:06.090970  711632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:47:06.098776  711632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:47:06.214159  711632 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 08:47:06.418146  711632 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 08:47:06.418229  711632 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 08:47:06.422263  711632 start.go:574] Will wait 60s for crictl version
	I0111 08:47:06.422338  711632 ssh_runner.go:195] Run: which crictl
	I0111 08:47:06.425837  711632 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:47:06.451140  711632 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 08:47:06.451234  711632 ssh_runner.go:195] Run: crio --version
	I0111 08:47:06.479033  711632 ssh_runner.go:195] Run: crio --version
	I0111 08:47:06.511532  711632 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 08:47:07.149718  705342 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.535080118s)
	I0111 08:47:07.149745  705342 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 08:47:07.149797  705342 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 08:47:07.155884  705342 start.go:574] Will wait 60s for crictl version
	I0111 08:47:07.155949  705342 ssh_runner.go:195] Run: which crictl
	I0111 08:47:07.167644  705342 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:47:07.206840  705342 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 08:47:07.206927  705342 ssh_runner.go:195] Run: crio --version
	I0111 08:47:07.253381  705342 ssh_runner.go:195] Run: crio --version
	I0111 08:47:07.297530  705342 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 08:47:07.300535  705342 cli_runner.go:164] Run: docker network inspect pause-042270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:47:07.318546  705342 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 08:47:07.323192  705342 kubeadm.go:884] updating cluster {Name:pause-042270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-042270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:47:07.323366  705342 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:47:07.323433  705342 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:47:07.376125  705342 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:47:07.376151  705342 crio.go:433] Images already preloaded, skipping extraction
	I0111 08:47:07.376208  705342 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:47:07.416160  705342 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:47:07.416186  705342 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:47:07.416195  705342 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0111 08:47:07.416292  705342 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-042270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-042270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:47:07.416375  705342 ssh_runner.go:195] Run: crio config
	I0111 08:47:07.492884  705342 cni.go:84] Creating CNI manager for ""
	I0111 08:47:07.492909  705342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:47:07.492932  705342 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:47:07.492954  705342 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-042270 NodeName:pause-042270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:47:07.493087  705342 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-042270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 08:47:07.493166  705342 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:47:07.506662  705342 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:47:07.506766  705342 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:47:07.516730  705342 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0111 08:47:07.532885  705342 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:47:07.551069  705342 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0111 08:47:07.566749  705342 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:47:07.571129  705342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:47:07.769273  705342 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:47:07.794856  705342 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270 for IP: 192.168.76.2
	I0111 08:47:07.794879  705342 certs.go:195] generating shared ca certs ...
	I0111 08:47:07.794898  705342 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:47:07.795070  705342 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 08:47:07.795118  705342 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 08:47:07.795130  705342 certs.go:257] generating profile certs ...
	I0111 08:47:07.795252  705342 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/client.key
	I0111 08:47:07.795333  705342 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/apiserver.key.b14d61a9
	I0111 08:47:07.795419  705342 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/proxy-client.key
	I0111 08:47:07.795548  705342 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 08:47:07.795596  705342 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 08:47:07.795609  705342 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 08:47:07.795635  705342 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 08:47:07.795662  705342 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:47:07.795698  705342 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 08:47:07.795763  705342 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 08:47:07.796461  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:47:07.821453  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 08:47:07.842689  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:47:07.866019  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:47:07.889080  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0111 08:47:07.911240  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:47:07.932805  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:47:07.955112  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 08:47:07.977148  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 08:47:07.998790  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:47:08.021965  705342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 08:47:08.044381  705342 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:47:08.060719  705342 ssh_runner.go:195] Run: openssl version
	I0111 08:47:08.068231  705342 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 08:47:08.076894  705342 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 08:47:08.085582  705342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 08:47:08.089925  705342 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 08:47:08.090002  705342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 08:47:08.132329  705342 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:47:08.141129  705342 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 08:47:08.149604  705342 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 08:47:08.158266  705342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 08:47:08.162668  705342 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 08:47:08.162752  705342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 08:47:08.207327  705342 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:47:08.215949  705342 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:08.224205  705342 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:47:08.232626  705342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:08.237144  705342 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:08.237218  705342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:08.279593  705342 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:47:08.288292  705342 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:47:08.292819  705342 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 08:47:08.342291  705342 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 08:47:08.387439  705342 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 08:47:08.431089  705342 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 08:47:08.476263  705342 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 08:47:08.519574  705342 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0111 08:47:08.571393  705342 kubeadm.go:401] StartCluster: {Name:pause-042270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-042270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:47:08.571509  705342 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:47:08.571572  705342 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:47:08.604139  705342 cri.go:96] found id: "608d40b7c34b0aa005a4dc964b0820a313cca729a0940d77d4611c6f8f338715"
	I0111 08:47:08.604168  705342 cri.go:96] found id: "2ef6b516b54d3ef537d1455d60abd47dafe430aefdb427a778bb5733ef2f39a4"
	I0111 08:47:08.604173  705342 cri.go:96] found id: "0b7fcbbd82786fed4387b48391bd12d068f8935bd11d2de482be853f78820f5f"
	I0111 08:47:08.604176  705342 cri.go:96] found id: "9bc6322fbfe5b3aeb3cc28d0de46bd73d50006f6f24385238e2d536bfb5ca556"
	I0111 08:47:08.604179  705342 cri.go:96] found id: "a8599322e647ea17e3e1d9753183eca263c26a46964d0822f85fab8f2399a7fa"
	I0111 08:47:08.604183  705342 cri.go:96] found id: "9bc5caca9724764aa07f8310a52ec008336dd061840833714e8054f0bc2d4592"
	I0111 08:47:08.604185  705342 cri.go:96] found id: "4091f664f637aede6861e233180d95399d5deaeeced980fb3d4654d7fd3396f3"
	I0111 08:47:08.604188  705342 cri.go:96] found id: "c657c976d677e9e5a67a63345b859b3473205cff8345aad91fcee3a3485251a6"
	I0111 08:47:08.604192  705342 cri.go:96] found id: "e4f927ff762a02e598b216fe9c75e5e7250c2463356b93b401329e89a8fd483d"
	I0111 08:47:08.604199  705342 cri.go:96] found id: "043f80b8901207f1b00f2e5d8307335f58820fe7d929fc27e1c0b07106271e96"
	I0111 08:47:08.604202  705342 cri.go:96] found id: "7df702dfc7823702c012152170a5970a42c963801b47788b5c53f0a68a4b5b0a"
	I0111 08:47:08.604205  705342 cri.go:96] found id: "5bd088708d4deb3e026194575f822c0860ec80b9327e4f9e76e6e0fa14fbe2f1"
	I0111 08:47:08.604223  705342 cri.go:96] found id: ""
	I0111 08:47:08.604275  705342 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 08:47:08.617811  705342 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:47:08Z" level=error msg="open /run/runc: no such file or directory"
	I0111 08:47:08.617879  705342 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:47:08.626017  705342 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 08:47:08.626041  705342 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 08:47:08.626106  705342 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 08:47:08.633515  705342 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 08:47:08.639231  705342 kubeconfig.go:125] found "pause-042270" server: "https://192.168.76.2:8443"
	I0111 08:47:08.640063  705342 kapi.go:59] client config for pause-042270: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/client.crt", KeyFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/client.key", CAFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f7bf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0111 08:47:08.640614  705342 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0111 08:47:08.640640  705342 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0111 08:47:08.640652  705342 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0111 08:47:08.640658  705342 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0111 08:47:08.640663  705342 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0111 08:47:08.640668  705342 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0111 08:47:08.640982  705342 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 08:47:08.668764  705342 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0111 08:47:08.668801  705342 kubeadm.go:602] duration metric: took 42.753789ms to restartPrimaryControlPlane
	I0111 08:47:08.668812  705342 kubeadm.go:403] duration metric: took 97.427729ms to StartCluster
	I0111 08:47:08.668830  705342 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:47:08.668899  705342 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:47:08.669540  705342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:47:08.669738  705342 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 08:47:08.670075  705342 config.go:182] Loaded profile config "pause-042270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:47:08.670155  705342 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 08:47:08.675753  705342 out.go:179] * Enabled addons: 
	I0111 08:47:08.675815  705342 out.go:179] * Verifying Kubernetes components...
	I0111 08:47:06.514484  711632 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-102854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:47:06.531299  711632 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 08:47:06.535265  711632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:47:06.545120  711632 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-102854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kubernetes-upgrade-102854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:47:06.545231  711632 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 08:47:06.545285  711632 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:47:06.579703  711632 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I0111 08:47:06.579772  711632 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
	I0111 08:47:06.608312  711632 crio.go:450] Found 9 existing images, backing up...
	I0111 08:47:06.608398  711632 ssh_runner.go:195] Run: mktemp -d
	I0111 08:47:06.613776  711632 crio.go:290] Saving image docker.io/kindest/kindnetd:v20230511-dc714da8: /tmp/tmp.6IdUP1cGGz/b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79.tar
	I0111 08:47:06.613851  711632 ssh_runner.go:195] Run: sudo podman save docker.io/kindest/kindnetd:v20230511-dc714da8 -o /tmp/tmp.6IdUP1cGGz/b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79.tar
	I0111 08:47:07.027888  711632 crio.go:290] Saving image gcr.io/k8s-minikube/storage-provisioner:v5: /tmp/tmp.6IdUP1cGGz/ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6.tar
	I0111 08:47:07.027971  711632 ssh_runner.go:195] Run: sudo podman save gcr.io/k8s-minikube/storage-provisioner:v5 -o /tmp/tmp.6IdUP1cGGz/ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6.tar
	I0111 08:47:07.219171  711632 crio.go:290] Saving image registry.k8s.io/coredns/coredns:v1.10.1: /tmp/tmp.6IdUP1cGGz/97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108.tar
	I0111 08:47:07.219261  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/coredns/coredns:v1.10.1 -o /tmp/tmp.6IdUP1cGGz/97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108.tar
	I0111 08:47:07.525467  711632 crio.go:290] Saving image registry.k8s.io/etcd:3.5.9-0: /tmp/tmp.6IdUP1cGGz/9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace.tar
	I0111 08:47:07.525544  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/etcd:3.5.9-0 -o /tmp/tmp.6IdUP1cGGz/9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace.tar
	I0111 08:47:08.534454  711632 ssh_runner.go:235] Completed: sudo podman save registry.k8s.io/etcd:3.5.9-0 -o /tmp/tmp.6IdUP1cGGz/9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace.tar: (1.008889787s)
	I0111 08:47:08.534482  711632 crio.go:290] Saving image registry.k8s.io/kube-apiserver:v1.28.0: /tmp/tmp.6IdUP1cGGz/00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766.tar
	I0111 08:47:08.534533  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/kube-apiserver:v1.28.0 -o /tmp/tmp.6IdUP1cGGz/00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766.tar
	I0111 08:47:09.407816  711632 crio.go:290] Saving image registry.k8s.io/kube-controller-manager:v1.28.0: /tmp/tmp.6IdUP1cGGz/46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5.tar
	I0111 08:47:09.407879  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/kube-controller-manager:v1.28.0 -o /tmp/tmp.6IdUP1cGGz/46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5.tar
	I0111 08:47:08.678953  705342 addons.go:530] duration metric: took 8.81957ms for enable addons: enabled=[]
	I0111 08:47:08.679055  705342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:47:09.030336  705342 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:47:09.082685  705342 node_ready.go:35] waiting up to 6m0s for node "pause-042270" to be "Ready" ...
	I0111 08:47:10.364183  711632 crio.go:290] Saving image registry.k8s.io/kube-proxy:v1.28.0: /tmp/tmp.6IdUP1cGGz/940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d.tar
	I0111 08:47:10.364273  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/kube-proxy:v1.28.0 -o /tmp/tmp.6IdUP1cGGz/940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d.tar
	I0111 08:47:10.947414  711632 crio.go:290] Saving image registry.k8s.io/kube-scheduler:v1.28.0: /tmp/tmp.6IdUP1cGGz/762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee.tar
	I0111 08:47:10.947495  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/kube-scheduler:v1.28.0 -o /tmp/tmp.6IdUP1cGGz/762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee.tar
	I0111 08:47:11.549336  711632 crio.go:290] Saving image registry.k8s.io/pause:3.9: /tmp/tmp.6IdUP1cGGz/829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e.tar
	I0111 08:47:11.549404  711632 ssh_runner.go:195] Run: sudo podman save registry.k8s.io/pause:3.9 -o /tmp/tmp.6IdUP1cGGz/829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e.tar
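Editor's note: the lines above back up each image already present in the CRI-O store with `podman save` into a temp directory before the preload tarball overwrites the image store, and the backups are later restored with `podman load`. A minimal sketch of that round trip, run against a local shell rather than minikube's ssh_runner; the image list and tar naming here are illustrative only:

// Sketch of the podman save/load backup-and-restore cycle seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	images := []string{"gcr.io/k8s-minikube/storage-provisioner:v5"} // illustrative list
	dir, err := os.MkdirTemp("", "image-backup")
	if err != nil {
		panic(err)
	}
	tars := make([]string, 0, len(images))
	for _, img := range images {
		// The log names each tar after the image ID; a sanitized tag is used here instead.
		tar := filepath.Join(dir, strings.NewReplacer("/", "_", ":", "_").Replace(img)+".tar")
		if err := run("sudo", "podman", "save", img, "-o", tar); err != nil {
			panic(err)
		}
		tars = append(tars, tar)
	}
	// ... the preloaded tarball would be extracted over /var at this point ...
	for _, tar := range tars {
		if err := run("sudo", "podman", "load", "-i", tar); err != nil {
			panic(err)
		}
	}
	fmt.Println("backed up and restored", len(tars), "image(s) via", dir)
}
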
	I0111 08:47:11.632309  711632 ssh_runner.go:195] Run: which lz4
	I0111 08:47:11.635980  711632 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0111 08:47:11.640063  711632 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0111 08:47:11.640097  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306152852 bytes)
	I0111 08:47:13.635070  711632 crio.go:496] duration metric: took 1.999121615s to copy over tarball
	I0111 08:47:13.635192  711632 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0111 08:47:14.959750  705342 node_ready.go:49] node "pause-042270" is "Ready"
	I0111 08:47:14.959781  705342 node_ready.go:38] duration metric: took 5.877057511s for node "pause-042270" to be "Ready" ...
	I0111 08:47:14.959796  705342 api_server.go:52] waiting for apiserver process to appear ...
	I0111 08:47:14.959861  705342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:47:15.041141  705342 api_server.go:72] duration metric: took 6.371362572s to wait for apiserver process to appear ...
	I0111 08:47:15.041169  705342 api_server.go:88] waiting for apiserver healthz status ...
	I0111 08:47:15.041193  705342 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 08:47:15.177698  705342 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0111 08:47:15.177730  705342 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0111 08:47:15.542073  705342 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 08:47:15.562336  705342 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 08:47:15.562365  705342 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 08:47:16.041515  705342 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 08:47:16.054622  705342 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 08:47:16.054669  705342 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 08:47:16.541247  705342 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 08:47:16.550796  705342 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0111 08:47:16.552012  705342 api_server.go:141] control plane version: v1.35.0
	I0111 08:47:16.552044  705342 api_server.go:131] duration metric: took 1.510863571s to wait for apiserver health ...
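Editor's note: the healthz polling above first sees 403 (anonymous access to /healthz is forbidden), then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200 "ok". A minimal sketch of a comparable poll using the profile's client certificate and CA (paths taken from the kapi client config logged above); retry timing is an assumption, and this is not minikube's implementation:

// Poll the apiserver /healthz endpoint until it returns 200 "ok".
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	base := "/home/jenkins/minikube-integration/22402-575040/.minikube"
	cert, err := tls.LoadX509KeyPair(base+"/profiles/pause-042270/client.crt", base+"/profiles/pause-042270/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		}},
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 responses are simply retried until the hooks settle.
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
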
	I0111 08:47:16.552085  705342 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 08:47:16.557783  705342 system_pods.go:59] 7 kube-system pods found
	I0111 08:47:16.557818  705342 system_pods.go:61] "coredns-7d764666f9-rvvbr" [b97d5e73-1b07-4f9e-afdb-f28f370a600e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 08:47:16.557827  705342 system_pods.go:61] "etcd-pause-042270" [f7798498-721b-4c9b-aec7-658a3bb8a17e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 08:47:16.557835  705342 system_pods.go:61] "kindnet-45gwk" [7a16ed15-2c49-4c4a-90a5-bc8d0439b6b0] Running
	I0111 08:47:16.557842  705342 system_pods.go:61] "kube-apiserver-pause-042270" [31eed741-7615-49d7-939e-cd2bd5220ea3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 08:47:16.557855  705342 system_pods.go:61] "kube-controller-manager-pause-042270" [9ad8453d-aab9-462a-8b5b-3a4da7e5f958] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 08:47:16.557865  705342 system_pods.go:61] "kube-proxy-bdk4s" [e4b86581-45ce-4c68-b7d0-c1a7f3ef088f] Running
	I0111 08:47:16.557872  705342 system_pods.go:61] "kube-scheduler-pause-042270" [338504f7-7c37-42b1-a7bd-d1bd5f08794c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 08:47:16.557880  705342 system_pods.go:74] duration metric: took 5.789201ms to wait for pod list to return data ...
	I0111 08:47:16.557892  705342 default_sa.go:34] waiting for default service account to be created ...
	I0111 08:47:16.561089  705342 default_sa.go:45] found service account: "default"
	I0111 08:47:16.561117  705342 default_sa.go:55] duration metric: took 3.218326ms for default service account to be created ...
	I0111 08:47:16.561130  705342 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 08:47:16.564342  705342 system_pods.go:86] 7 kube-system pods found
	I0111 08:47:16.564381  705342 system_pods.go:89] "coredns-7d764666f9-rvvbr" [b97d5e73-1b07-4f9e-afdb-f28f370a600e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 08:47:16.564391  705342 system_pods.go:89] "etcd-pause-042270" [f7798498-721b-4c9b-aec7-658a3bb8a17e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 08:47:16.564397  705342 system_pods.go:89] "kindnet-45gwk" [7a16ed15-2c49-4c4a-90a5-bc8d0439b6b0] Running
	I0111 08:47:16.564405  705342 system_pods.go:89] "kube-apiserver-pause-042270" [31eed741-7615-49d7-939e-cd2bd5220ea3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 08:47:16.564412  705342 system_pods.go:89] "kube-controller-manager-pause-042270" [9ad8453d-aab9-462a-8b5b-3a4da7e5f958] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 08:47:16.564418  705342 system_pods.go:89] "kube-proxy-bdk4s" [e4b86581-45ce-4c68-b7d0-c1a7f3ef088f] Running
	I0111 08:47:16.564430  705342 system_pods.go:89] "kube-scheduler-pause-042270" [338504f7-7c37-42b1-a7bd-d1bd5f08794c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 08:47:16.564439  705342 system_pods.go:126] duration metric: took 3.302988ms to wait for k8s-apps to be running ...
	I0111 08:47:16.564452  705342 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 08:47:16.564508  705342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:47:16.580135  705342 system_svc.go:56] duration metric: took 15.674189ms WaitForService to wait for kubelet
	I0111 08:47:16.580168  705342 kubeadm.go:587] duration metric: took 7.910397238s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 08:47:16.580187  705342 node_conditions.go:102] verifying NodePressure condition ...
	I0111 08:47:16.583398  705342 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 08:47:16.583439  705342 node_conditions.go:123] node cpu capacity is 2
	I0111 08:47:16.583453  705342 node_conditions.go:105] duration metric: took 3.260854ms to run NodePressure ...
	I0111 08:47:16.583467  705342 start.go:242] waiting for startup goroutines ...
	I0111 08:47:16.583474  705342 start.go:247] waiting for cluster config update ...
	I0111 08:47:16.583483  705342 start.go:256] writing updated cluster config ...
	I0111 08:47:16.583797  705342 ssh_runner.go:195] Run: rm -f paused
	I0111 08:47:16.587768  705342 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 08:47:16.588381  705342 kapi.go:59] client config for pause-042270: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/client.crt", KeyFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/profiles/pause-042270/client.key", CAFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f7bf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0111 08:47:16.592013  705342 pod_ready.go:83] waiting for pod "coredns-7d764666f9-rvvbr" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:16.518804  711632 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.883561924s)
	I0111 08:47:16.518831  711632 crio.go:503] duration metric: took 2.883693759s to extract the tarball
	I0111 08:47:16.518839  711632 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0111 08:47:16.557050  711632 crio.go:511] Restoring backed up images...
	I0111 08:47:16.557071  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79.tar
	I0111 08:47:16.557140  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79.tar
	I0111 08:47:17.497976  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6.tar
	I0111 08:47:17.498043  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6.tar
	I0111 08:47:17.625977  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108.tar
	I0111 08:47:17.626046  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108.tar
	I0111 08:47:18.382088  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace.tar
	I0111 08:47:18.382184  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace.tar
	W0111 08:47:18.602703  705342 pod_ready.go:104] pod "coredns-7d764666f9-rvvbr" is not "Ready", error: <nil>
	W0111 08:47:21.098729  705342 pod_ready.go:104] pod "coredns-7d764666f9-rvvbr" is not "Ready", error: <nil>
	I0111 08:47:21.597371  705342 pod_ready.go:94] pod "coredns-7d764666f9-rvvbr" is "Ready"
	I0111 08:47:21.597406  705342 pod_ready.go:86] duration metric: took 5.005325172s for pod "coredns-7d764666f9-rvvbr" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:21.600355  705342 pod_ready.go:83] waiting for pod "etcd-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:20.468944  711632 ssh_runner.go:235] Completed: sudo podman load -i /tmp/tmp.6IdUP1cGGz/9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace.tar: (2.086731957s)
	I0111 08:47:20.468985  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766.tar
	I0111 08:47:20.469037  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766.tar
	I0111 08:47:21.361876  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5.tar
	I0111 08:47:21.361982  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5.tar
	I0111 08:47:22.275598  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d.tar
	I0111 08:47:22.275669  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d.tar
	I0111 08:47:23.086738  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee.tar
	I0111 08:47:23.086808  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee.tar
	I0111 08:47:23.588201  711632 crio.go:275] Loading image: /tmp/tmp.6IdUP1cGGz/829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e.tar
	I0111 08:47:23.588267  711632 ssh_runner.go:195] Run: sudo podman load -i /tmp/tmp.6IdUP1cGGz/829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e.tar
	I0111 08:47:23.741781  711632 ssh_runner.go:195] Run: rm -rf /tmp/tmp.6IdUP1cGGz
	I0111 08:47:23.826250  711632 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:47:23.876375  711632 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 08:47:23.876400  711632 cache_images.go:86] Images are preloaded, skipping loading
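Editor's note: after the preload is extracted and the backups reloaded, `crictl images --output json` is run again to confirm the expected images are present before declaring them preloaded. A small sketch of that presence check; the JSON field names follow the CRI ListImages response shape, but treat the exact struct as an assumption:

// Check whether a required image tag shows up in `crictl images --output json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.35.0")
	fmt.Println(ok, err)
}
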
	I0111 08:47:23.876409  711632 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0111 08:47:23.876507  711632 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-102854 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:kubernetes-upgrade-102854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:47:23.876593  711632 ssh_runner.go:195] Run: crio config
	I0111 08:47:23.939460  711632 cni.go:84] Creating CNI manager for ""
	I0111 08:47:23.939490  711632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:47:23.939513  711632 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:47:23.939536  711632 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-102854 NodeName:kubernetes-upgrade-102854 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:47:23.939689  711632 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-102854"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 08:47:23.939770  711632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:47:23.948456  711632 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:47:23.948536  711632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:47:23.955741  711632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0111 08:47:23.969014  711632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:47:23.982040  711632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2242 bytes)
	I0111 08:47:23.994919  711632 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:47:23.998575  711632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
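Editor's note: the shell one-liner above strips any existing `<tab>control-plane.minikube.internal` line from /etc/hosts, appends the fresh mapping, writes the result to a temp file, and installs it with `sudo cp` so the file is replaced in a single step. An equivalent sketch in Go (not minikube's code), assuming the same host name and IP as the log:

// Replace the control-plane.minikube.internal entry in /etc/hosts.
package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	const entry = "192.168.85.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the control-plane host name.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	tmp, err := os.CreateTemp("", "hosts")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.WriteString(strings.Join(kept, "\n") + "\n"); err != nil {
		panic(err)
	}
	tmp.Close()

	// sudo cp installs the new file in one operation, like the logged command.
	if err := exec.Command("sudo", "cp", tmp.Name(), "/etc/hosts").Run(); err != nil {
		panic(err)
	}
}
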
	I0111 08:47:24.012228  711632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:47:24.153648  711632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:47:24.170506  711632 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854 for IP: 192.168.85.2
	I0111 08:47:24.170582  711632 certs.go:195] generating shared ca certs ...
	I0111 08:47:24.170614  711632 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:47:24.170797  711632 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 08:47:24.170891  711632 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 08:47:24.170917  711632 certs.go:257] generating profile certs ...
	I0111 08:47:24.171045  711632 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/client.key
	I0111 08:47:24.171165  711632 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/apiserver.key.cdcbcf04
	I0111 08:47:24.171230  711632 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/proxy-client.key
	I0111 08:47:24.171381  711632 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 08:47:24.171447  711632 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 08:47:24.171471  711632 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 08:47:24.171535  711632 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 08:47:24.171591  711632 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:47:24.171646  711632 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 08:47:24.171742  711632 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 08:47:24.172451  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:47:24.198471  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 08:47:24.225434  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:47:24.247943  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:47:24.267927  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0111 08:47:24.295185  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:47:24.321273  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:47:24.343995  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 08:47:24.361846  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:47:24.382251  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 08:47:24.401829  711632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 08:47:24.421712  711632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:47:24.435085  711632 ssh_runner.go:195] Run: openssl version
	I0111 08:47:24.443680  711632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 08:47:24.451933  711632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 08:47:24.460524  711632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 08:47:24.464497  711632 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 08:47:24.464613  711632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 08:47:24.505313  711632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:47:24.513215  711632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:24.520981  711632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:47:24.528686  711632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:24.532758  711632 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:24.532896  711632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:47:24.573907  711632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:47:24.581633  711632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 08:47:24.589715  711632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 08:47:24.597460  711632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 08:47:24.601741  711632 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 08:47:24.601835  711632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 08:47:24.645605  711632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:47:24.653378  711632 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:47:24.657178  711632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 08:47:24.698887  711632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 08:47:24.740441  711632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 08:47:24.782344  711632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 08:47:24.823946  711632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 08:47:24.866688  711632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
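Editor's note: the `openssl x509 -noout -checkend 86400` calls above verify that each existing control-plane certificate remains valid for at least 24 hours before it is reused rather than regenerated. A native-Go equivalent of that check using crypto/x509; the list of paths is an illustrative subset of the files checked in the log:

// Report certificates that expire within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	paths := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, p := range paths {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Printf("%s expiring within 24h: %v (err: %v)\n", p, soon, err)
	}
}
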
	W0111 08:47:23.607308  705342 pod_ready.go:104] pod "etcd-pause-042270" is not "Ready", error: <nil>
	I0111 08:47:25.605660  705342 pod_ready.go:94] pod "etcd-pause-042270" is "Ready"
	I0111 08:47:25.605685  705342 pod_ready.go:86] duration metric: took 4.005297212s for pod "etcd-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:25.608750  705342 pod_ready.go:83] waiting for pod "kube-apiserver-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.115367  705342 pod_ready.go:94] pod "kube-apiserver-pause-042270" is "Ready"
	I0111 08:47:27.115448  705342 pod_ready.go:86] duration metric: took 1.506678286s for pod "kube-apiserver-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.117902  705342 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.123176  705342 pod_ready.go:94] pod "kube-controller-manager-pause-042270" is "Ready"
	I0111 08:47:27.123256  705342 pod_ready.go:86] duration metric: took 5.327442ms for pod "kube-controller-manager-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.125531  705342 pod_ready.go:83] waiting for pod "kube-proxy-bdk4s" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.130562  705342 pod_ready.go:94] pod "kube-proxy-bdk4s" is "Ready"
	I0111 08:47:27.130643  705342 pod_ready.go:86] duration metric: took 5.08895ms for pod "kube-proxy-bdk4s" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.203628  705342 pod_ready.go:83] waiting for pod "kube-scheduler-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.604304  705342 pod_ready.go:94] pod "kube-scheduler-pause-042270" is "Ready"
	I0111 08:47:27.604378  705342 pod_ready.go:86] duration metric: took 400.722308ms for pod "kube-scheduler-pause-042270" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:47:27.604412  705342 pod_ready.go:40] duration metric: took 11.016566665s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
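Editor's note: the "extra waiting" phase above polls each labeled kube-system pod until its Ready condition is True or the pod disappears. A minimal client-go sketch of that readiness check; the kubeconfig path and pod name are taken from the log for illustration, and the label-selector handling is omitted:

// Wait for a named kube-system pod to become Ready or be deleted.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22402-575040/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	name := "coredns-7d764666f9-rvvbr" // example pod from the log
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod is gone")
			return
		}
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
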
	I0111 08:47:27.699553  705342 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 08:47:27.703743  705342 out.go:203] 
	W0111 08:47:27.706853  705342 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 08:47:27.709970  705342 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 08:47:27.713097  705342 out.go:179] * Done! kubectl is now configured to use "pause-042270" cluster and "default" namespace by default
	I0111 08:47:24.908676  711632 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-102854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:kubernetes-upgrade-102854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:47:24.908767  711632 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 08:47:24.908882  711632 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:47:24.937286  711632 cri.go:96] found id: ""
	I0111 08:47:24.937362  711632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:47:24.945400  711632 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 08:47:24.945421  711632 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 08:47:24.945498  711632 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 08:47:24.953247  711632 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 08:47:24.953801  711632 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-102854" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:47:24.954053  711632 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-575040/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-102854" cluster setting kubeconfig missing "kubernetes-upgrade-102854" context setting]
	I0111 08:47:24.954560  711632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:47:24.955231  711632 kapi.go:59] client config for kubernetes-upgrade-102854: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/client.crt", KeyFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kubernetes-upgrade-102854/client.key", CAFile:"/home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f7bf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0111 08:47:24.955796  711632 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0111 08:47:24.955819  711632 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0111 08:47:24.955825  711632 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0111 08:47:24.955830  711632 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0111 08:47:24.955834  711632 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0111 08:47:24.955838  711632 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0111 08:47:24.956096  711632 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 08:47:24.965803  711632 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2026-01-11 08:46:40.555241336 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2026-01-11 08:47:23.987850844 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-102854"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	@@ -51,6 +54,7 @@
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	 containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	+failCgroupV1: false
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
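The diff minikube printed above is the detected config drift: the regenerated manifest moves from the kubeadm v1beta3 config API to v1beta4 (extraArgs become lists of name/value pairs instead of string maps), drops the etcd proxy-refresh-interval override, adds failCgroupV1 to the kubelet config, and bumps kubernetesVersion from v1.28.0 to v1.35.0. Minikube resolves the drift by copying kubeadm.yaml.new over kubeadm.yaml, as the later sudo cp shows. Purely as an illustrative sketch, an equivalent manual conversion of the old manifest could use kubeadm's own migration helper (a standard kubeadm subcommand, not something this test runs):

	# Hypothetical manual equivalent of the reconfiguration above:
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm.yaml.new
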
	I0111 08:47:24.965884  711632 kubeadm.go:1161] stopping kube-system containers ...
	I0111 08:47:24.965903  711632 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0111 08:47:24.965960  711632 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:47:24.993756  711632 cri.go:96] found id: ""
	I0111 08:47:24.993826  711632 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0111 08:47:25.017185  711632 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:47:25.025869  711632 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Jan 11 08:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 11 08:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Jan 11 08:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan 11 08:46 /etc/kubernetes/scheduler.conf
	
	I0111 08:47:25.026026  711632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:47:25.035283  711632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:47:25.044435  711632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:47:25.053361  711632 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0111 08:47:25.053458  711632 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:47:25.061401  711632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:47:25.075839  711632 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0111 08:47:25.075965  711632 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:47:25.084650  711632 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:47:25.093500  711632 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 08:47:25.150060  711632 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 08:47:26.623560  711632 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.473353571s)
	I0111 08:47:26.623674  711632 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0111 08:47:26.830705  711632 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0111 08:47:26.895921  711632 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0111 08:47:26.973337  711632 api_server.go:52] waiting for apiserver process to appear ...
	I0111 08:47:26.973415  711632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:47:27.473621  711632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:47:27.974171  711632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:47:28.011962  711632 api_server.go:72] duration metric: took 1.038634909s to wait for apiserver process to appear ...
	I0111 08:47:28.011992  711632 api_server.go:88] waiting for apiserver healthz status ...
	I0111 08:47:28.012014  711632 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 08:47:31.063858  711632 api_server.go:325] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0111 08:47:31.063891  711632 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0111 08:47:31.063906  711632 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 08:47:31.072821  711632 api_server.go:325] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0111 08:47:31.072893  711632 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0111 08:47:31.512401  711632 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 08:47:31.530859  711632 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 08:47:31.530930  711632 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 08:47:32.012183  711632 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 08:47:32.021552  711632 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 08:47:32.021633  711632 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 08:47:32.512065  711632 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 08:47:32.521015  711632 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 08:47:32.521042  711632 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 08:47:33.012156  711632 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 08:47:33.021860  711632 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0111 08:47:33.045457  711632 api_server.go:141] control plane version: v1.35.0
	I0111 08:47:33.045490  711632 api_server.go:131] duration metric: took 5.033489934s to wait for apiserver health ...
	I0111 08:47:33.045501  711632 cni.go:84] Creating CNI manager for ""
	I0111 08:47:33.045507  711632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:47:33.048666  711632 out.go:179] * Configuring CNI (Container Networking Interface) ...
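The sequence above is minikube's readiness loop against the restarted apiserver: /healthz answers 403 for the anonymous probe, then 500 while the rbac/bootstrap-roles and scheduling bootstrap post-start hooks finish, and finally 200 about five seconds after kubeadm brought the control plane back. The same probe can be repeated by hand against this profile (a sketch; the endpoint comes from the log, and the verbose form assumes authenticated access through the cluster's kubeconfig context):

	# Anonymous probe, matching what the test does (403/500/200 depending on timing):
	curl -k https://192.168.85.2:8443/healthz
	# Authenticated probe with per-check detail:
	kubectl --context kubernetes-upgrade-102854 get --raw='/healthz?verbose'
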
	
	
	==> CRI-O <==
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.167801736Z" level=info msg="Started container" PID=2491 containerID=b061ecb176606fab39561201f0b787b9dd4d71b0fc0623474ac1a6dc66c8e2c4 description=kube-system/kube-apiserver-pause-042270/kube-apiserver id=09163e85-071e-41ac-9ef2-805befb15cef name=/runtime.v1.RuntimeService/StartContainer sandboxID=22df88bdf853f0b25200b13cf7ac09687ecefc35f7d96a5107361a20b33f94e3
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.177380857Z" level=info msg="Created container 6757114d0bafdfdc9e1e9a1d717d07e9d57e8a08cff36664f3830bd435d07c8e: kube-system/etcd-pause-042270/etcd" id=a077f238-8442-414a-b0d3-1b4e23d64820 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.179465572Z" level=info msg="Starting container: 6757114d0bafdfdc9e1e9a1d717d07e9d57e8a08cff36664f3830bd435d07c8e" id=3a54c5b7-7d97-47fd-b7a6-40cbe8442664 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.181149173Z" level=info msg="Created container 9b9a55dfc3ce9cd0cb4e9ff91cf50836a88a208da4303f3ead3af6b677e6d084: kube-system/kindnet-45gwk/kindnet-cni" id=6e054cf0-3e9a-4fa4-9503-db4797f7c217 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.1819663Z" level=info msg="Starting container: 9b9a55dfc3ce9cd0cb4e9ff91cf50836a88a208da4303f3ead3af6b677e6d084" id=80f728cd-6842-4a98-b833-d9404fa6a275 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.183615677Z" level=info msg="Started container" PID=2499 containerID=6757114d0bafdfdc9e1e9a1d717d07e9d57e8a08cff36664f3830bd435d07c8e description=kube-system/etcd-pause-042270/etcd id=3a54c5b7-7d97-47fd-b7a6-40cbe8442664 name=/runtime.v1.RuntimeService/StartContainer sandboxID=886cb95c86b817ca65def38e7019d79e42c9891eccebb8a04ec6508e8c786373
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.189705395Z" level=info msg="Started container" PID=2485 containerID=9b9a55dfc3ce9cd0cb4e9ff91cf50836a88a208da4303f3ead3af6b677e6d084 description=kube-system/kindnet-45gwk/kindnet-cni id=80f728cd-6842-4a98-b833-d9404fa6a275 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac580480f27494fa1d617f5ea0edddc8144e5061b7e157d10a79a511cd7b9518
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.49853287Z" level=info msg="Created container 9e7781bd18991b364c5844f04276556b6c10c7136844673ea950edbde5503892: kube-system/kube-proxy-bdk4s/kube-proxy" id=61aa6279-53dd-475b-8976-ea47bb595ce8 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.499165494Z" level=info msg="Starting container: 9e7781bd18991b364c5844f04276556b6c10c7136844673ea950edbde5503892" id=81ccde5b-eae8-4431-818b-54abd03ca348 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 08:47:09 pause-042270 crio[2168]: time="2026-01-11T08:47:09.505198194Z" level=info msg="Started container" PID=2497 containerID=9e7781bd18991b364c5844f04276556b6c10c7136844673ea950edbde5503892 description=kube-system/kube-proxy-bdk4s/kube-proxy id=81ccde5b-eae8-4431-818b-54abd03ca348 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a46b81baeb92b0536258e746911a4db7395d911bde452eaa5049f27219bd363c
	Jan 11 08:47:11 pause-042270 crio[2168]: time="2026-01-11T08:47:11.452081402Z" level=info msg="Removing container: e4f927ff762a02e598b216fe9c75e5e7250c2463356b93b401329e89a8fd483d" id=dc254c97-a1f3-4ab1-b055-97ca496fa0ed name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 08:47:11 pause-042270 crio[2168]: time="2026-01-11T08:47:11.483846696Z" level=info msg="Removed container e4f927ff762a02e598b216fe9c75e5e7250c2463356b93b401329e89a8fd483d: kube-system/kube-scheduler-pause-042270/kube-scheduler" id=dc254c97-a1f3-4ab1-b055-97ca496fa0ed name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 08:47:11 pause-042270 crio[2168]: time="2026-01-11T08:47:11.485384531Z" level=info msg="Removing container: 7df702dfc7823702c012152170a5970a42c963801b47788b5c53f0a68a4b5b0a" id=8c947452-6883-450e-bc51-caff5d0d664a name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 08:47:11 pause-042270 crio[2168]: time="2026-01-11T08:47:11.512780894Z" level=info msg="Removed container 7df702dfc7823702c012152170a5970a42c963801b47788b5c53f0a68a4b5b0a: kube-system/kube-controller-manager-pause-042270/kube-controller-manager" id=8c947452-6883-450e-bc51-caff5d0d664a name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.572073978Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.572112033Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.578226901Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.578441484Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.585292485Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.585479835Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.595569108Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.59603922Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.596160501Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.603524978Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 08:47:19 pause-042270 crio[2168]: time="2026-01-11T08:47:19.603557306Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b061ecb176606       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     25 seconds ago       Running             kube-apiserver            1                   22df88bdf853f       kube-apiserver-pause-042270            kube-system
	6757114d0bafd       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     25 seconds ago       Running             etcd                      1                   886cb95c86b81       etcd-pause-042270                      kube-system
	9e7781bd18991       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     25 seconds ago       Running             kube-proxy                2                   a46b81baeb92b       kube-proxy-bdk4s                       kube-system
	9b9a55dfc3ce9       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     25 seconds ago       Running             kindnet-cni               2                   ac580480f2749       kindnet-45gwk                          kube-system
	d8e4dc716e9fb       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     25 seconds ago       Running             kube-controller-manager   2                   3bc3005987e88       kube-controller-manager-pause-042270   kube-system
	4f37daff12209       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     25 seconds ago       Running             kube-scheduler            2                   f559cbfea7e70       kube-scheduler-pause-042270            kube-system
	3386692eec9fe       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     25 seconds ago       Running             coredns                   2                   d479694e30c27       coredns-7d764666f9-rvvbr               kube-system
	608d40b7c34b0       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     About a minute ago   Created             coredns                   1                   d479694e30c27       coredns-7d764666f9-rvvbr               kube-system
	2ef6b516b54d3       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     About a minute ago   Created             kube-proxy                1                   a46b81baeb92b       kube-proxy-bdk4s                       kube-system
	0b7fcbbd82786       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     About a minute ago   Created             kindnet-cni               1                   ac580480f2749       kindnet-45gwk                          kube-system
	9bc6322fbfe5b       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     About a minute ago   Exited              kube-scheduler            1                   f559cbfea7e70       kube-scheduler-pause-042270            kube-system
	a8599322e647e       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     About a minute ago   Exited              kube-controller-manager   1                   3bc3005987e88       kube-controller-manager-pause-042270   kube-system
	9bc5caca97247       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     2 minutes ago        Exited              coredns                   0                   d479694e30c27       coredns-7d764666f9-rvvbr               kube-system
	4091f664f637a       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   2 minutes ago        Exited              kindnet-cni               0                   ac580480f2749       kindnet-45gwk                          kube-system
	c657c976d677e       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     2 minutes ago        Exited              kube-proxy                0                   a46b81baeb92b       kube-proxy-bdk4s                       kube-system
	043f80b890120       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     2 minutes ago        Exited              kube-apiserver            0                   22df88bdf853f       kube-apiserver-pause-042270            kube-system
	5bd088708d4de       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     2 minutes ago        Exited              etcd                      0                   886cb95c86b81       etcd-pause-042270                      kube-system
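The table above is the CRI view of the pause-042270 node after its restart: attempt 1 and 2 control-plane containers are Running while their attempt 0 predecessors are Exited or merely Created. A sketch of how to regenerate it from inside the node, reusing the same crictl filter the minikube logs show (minikube ssh with the profile flag is the assumed entry point):

	minikube -p pause-042270 ssh
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
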
	
	
	==> coredns [3386692eec9fee759f4c5f30957286e96e3ffe1d2f0d8a8509abfb8f37a2466f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59269 - 65235 "HINFO IN 4471625385466506611.361694457902993617. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.035534569s
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> coredns [608d40b7c34b0aa005a4dc964b0820a313cca729a0940d77d4611c6f8f338715] <==
	
	
	==> coredns [9bc5caca9724764aa07f8310a52ec008336dd061840833714e8054f0bc2d4592] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37970 - 39153 "HINFO IN 5538964161996014554.1094676101687428058. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015967769s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-042270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-042270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=pause-042270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T08_45_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 08:45:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-042270
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 08:47:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 08:45:22 +0000   Sun, 11 Jan 2026 08:44:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 08:45:22 +0000   Sun, 11 Jan 2026 08:44:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 08:45:22 +0000   Sun, 11 Jan 2026 08:44:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 08:45:22 +0000   Sun, 11 Jan 2026 08:45:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-042270
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                19c1111f-9168-4b54-8986-5e231c915609
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-rvvbr                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 etcd-pause-042270                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-45gwk                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-pause-042270             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-pause-042270    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-bdk4s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-pause-042270             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  2m26s  node-controller  Node pause-042270 event: Registered Node pause-042270 in Controller
	  Normal  RegisteredNode  16s    node-controller  Node pause-042270 event: Registered Node pause-042270 in Controller
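The node description above shows pause-042270 Ready since 08:45:22, with one RegisteredNode event per controller-manager start (2m26s and 16s ago). The same view can be pulled against this profile with kubectl; the context and node name are taken from the log:

	kubectl --context pause-042270 describe node pause-042270
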
	
	
	==> dmesg <==
	[Jan11 08:26] overlayfs: idmapped layers are currently not supported
	[Jan11 08:27] overlayfs: idmapped layers are currently not supported
	[  +2.584198] overlayfs: idmapped layers are currently not supported
	[Jan11 08:28] overlayfs: idmapped layers are currently not supported
	[ +33.770996] overlayfs: idmapped layers are currently not supported
	[Jan11 08:29] overlayfs: idmapped layers are currently not supported
	[  +3.600210] overlayfs: idmapped layers are currently not supported
	[Jan11 08:30] overlayfs: idmapped layers are currently not supported
	[Jan11 08:31] overlayfs: idmapped layers are currently not supported
	[Jan11 08:32] overlayfs: idmapped layers are currently not supported
	[Jan11 08:35] overlayfs: idmapped layers are currently not supported
	[Jan11 08:36] overlayfs: idmapped layers are currently not supported
	[Jan11 08:37] overlayfs: idmapped layers are currently not supported
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5bd088708d4deb3e026194575f822c0860ec80b9327e4f9e76e6e0fa14fbe2f1] <==
	{"level":"info","ts":"2026-01-11T08:44:57.760716Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T08:44:57.770250Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T08:44:57.770347Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T08:44:57.770612Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T08:44:57.770730Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-11T08:44:57.771385Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T08:44:57.798400Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-11T08:45:29.181152Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2026-01-11T08:45:29.181210Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-042270","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2026-01-11T08:45:29.181343Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2026-01-11T08:45:29.357304Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2026-01-11T08:45:29.358860Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2026-01-11T08:45:29.358923Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2026-01-11T08:45:29.358943Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2026-01-11T08:45:29.359014Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2026-01-11T08:45:29.359077Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2026-01-11T08:45:29.359121Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-11T08:45:29.359175Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2026-01-11T08:45:29.359235Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2026-01-11T08:45:29.359294Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2026-01-11T08:45:29.358759Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-11T08:45:29.362575Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2026-01-11T08:45:29.362726Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-11T08:45:29.362799Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T08:45:29.362844Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-042270","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [6757114d0bafdfdc9e1e9a1d717d07e9d57e8a08cff36664f3830bd435d07c8e] <==
	{"level":"info","ts":"2026-01-11T08:47:09.512713Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T08:47:09.512840Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T08:47:09.514202Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-11T08:47:09.538969Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T08:47:09.550324Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T08:47:09.558789Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T08:47:09.558845Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T08:47:09.826315Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T08:47:09.826408Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T08:47:09.826474Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T08:47:09.826488Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T08:47:09.826504Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T08:47:09.830193Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-11T08:47:09.830255Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T08:47:09.830280Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-11T08:47:09.830289Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-11T08:47:09.834398Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-042270 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T08:47:09.834438Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T08:47:09.834472Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T08:47:09.842395Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T08:47:09.847673Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T08:47:09.847769Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T08:47:09.848573Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T08:47:09.856527Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T08:47:09.938940Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 08:47:34 up  3:30,  0 user,  load average: 3.22, 2.66, 2.60
	Linux pause-042270 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b7fcbbd82786fed4387b48391bd12d068f8935bd11d2de482be853f78820f5f] <==
	
	
	==> kindnet [4091f664f637aede6861e233180d95399d5deaeeced980fb3d4654d7fd3396f3] <==
	I0111 08:45:12.540804       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 08:45:12.541056       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0111 08:45:12.541189       1 main.go:148] setting mtu 1500 for CNI 
	I0111 08:45:12.541208       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 08:45:12.541218       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T08:45:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 08:45:12.742969       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 08:45:12.743054       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 08:45:12.743091       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 08:45:12.744253       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 08:45:12.944294       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 08:45:12.944390       1 metrics.go:72] Registering metrics
	I0111 08:45:12.944474       1 controller.go:711] "Syncing nftables rules"
	I0111 08:45:22.743025       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 08:45:22.743675       1 main.go:301] handling current node
	
	
	==> kindnet [9b9a55dfc3ce9cd0cb4e9ff91cf50836a88a208da4303f3ead3af6b677e6d084] <==
	I0111 08:47:09.359982       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 08:47:09.360391       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0111 08:47:09.360581       1 main.go:148] setting mtu 1500 for CNI 
	I0111 08:47:09.360629       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 08:47:09.360664       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T08:47:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 08:47:09.563767       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 08:47:09.563865       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 08:47:09.563903       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 08:47:09.564784       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 08:47:15.365991       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 08:47:15.366084       1 metrics.go:72] Registering metrics
	I0111 08:47:15.366277       1 controller.go:711] "Syncing nftables rules"
	I0111 08:47:19.563330       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 08:47:19.563456       1 main.go:301] handling current node
	I0111 08:47:29.563864       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 08:47:29.563934       1 main.go:301] handling current node
	
	
	==> kube-apiserver [043f80b8901207f1b00f2e5d8307335f58820fe7d929fc27e1c0b07106271e96] <==
	W0111 08:45:29.219218       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219296       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219345       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219391       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219467       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219516       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219578       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219630       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219681       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219730       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219782       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219833       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219890       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219938       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.219988       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220038       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220096       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220143       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220197       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220257       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220304       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220353       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.220401       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0111 08:45:29.221702       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b061ecb176606fab39561201f0b787b9dd4d71b0fc0623474ac1a6dc66c8e2c4] <==
	I0111 08:47:15.163757       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:15.163801       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 08:47:15.182748       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 08:47:15.196889       1 aggregator.go:187] initial CRD sync complete...
	I0111 08:47:15.216014       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 08:47:15.216106       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 08:47:15.216138       1 cache.go:39] Caches are synced for autoregister controller
	I0111 08:47:15.210644       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:15.222789       1 policy_source.go:248] refreshing policies
	I0111 08:47:15.206543       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 08:47:15.224703       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 08:47:15.230198       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 08:47:15.231042       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0111 08:47:15.231212       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0111 08:47:15.239889       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 08:47:15.244804       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 08:47:15.273018       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 08:47:15.293181       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E0111 08:47:15.344370       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 08:47:15.656422       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 08:47:17.121025       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 08:47:18.420497       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 08:47:18.575983       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 08:47:18.615494       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 08:47:18.714646       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [a8599322e647ea17e3e1d9753183eca263c26a46964d0822f85fab8f2399a7fa] <==
	
	
	==> kube-controller-manager [d8e4dc716e9fbad51f33509cd8d8d0eb48040e799342b510f6b5274aab249c86] <==
	I0111 08:47:18.257691       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.257734       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.257922       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.259731       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.259839       1 range_allocator.go:177] "Sending events to api server"
	I0111 08:47:18.259873       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0111 08:47:18.259877       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:47:18.259882       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.259967       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.270610       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.271594       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.271641       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.271657       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.271696       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.274060       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.274139       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.275907       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.276001       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.278790       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:47:18.295989       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.326574       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.357853       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:18.357889       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 08:47:18.357898       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 08:47:18.393241       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [2ef6b516b54d3ef537d1455d60abd47dafe430aefdb427a778bb5733ef2f39a4] <==
	
	
	==> kube-proxy [9e7781bd18991b364c5844f04276556b6c10c7136844673ea950edbde5503892] <==
	I0111 08:47:11.313427       1 server_linux.go:53] "Using iptables proxy"
	I0111 08:47:11.768914       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:47:15.375411       1 shared_informer.go:377] "Caches are synced"
	I0111 08:47:15.376266       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0111 08:47:15.378420       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 08:47:15.459996       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 08:47:15.460098       1 server_linux.go:136] "Using iptables Proxier"
	I0111 08:47:15.466238       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 08:47:15.526031       1 server.go:529] "Version info" version="v1.35.0"
	I0111 08:47:15.526065       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 08:47:15.555176       1 config.go:200] "Starting service config controller"
	I0111 08:47:15.555270       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 08:47:15.558249       1 config.go:106] "Starting endpoint slice config controller"
	I0111 08:47:15.558330       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 08:47:15.564405       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 08:47:15.564491       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 08:47:15.566908       1 config.go:309] "Starting node config controller"
	I0111 08:47:15.568319       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 08:47:15.568401       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 08:47:15.657701       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 08:47:15.659322       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 08:47:15.664918       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c657c976d677e9e5a67a63345b859b3473205cff8345aad91fcee3a3485251a6] <==
	I0111 08:45:10.409416       1 server_linux.go:53] "Using iptables proxy"
	I0111 08:45:10.504599       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:45:10.605037       1 shared_informer.go:377] "Caches are synced"
	I0111 08:45:10.605068       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0111 08:45:10.605161       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 08:45:10.630121       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 08:45:10.630199       1 server_linux.go:136] "Using iptables Proxier"
	I0111 08:45:10.634716       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 08:45:10.635201       1 server.go:529] "Version info" version="v1.35.0"
	I0111 08:45:10.635217       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 08:45:10.642517       1 config.go:200] "Starting service config controller"
	I0111 08:45:10.642536       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 08:45:10.642554       1 config.go:106] "Starting endpoint slice config controller"
	I0111 08:45:10.642558       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 08:45:10.642571       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 08:45:10.642574       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 08:45:10.643236       1 config.go:309] "Starting node config controller"
	I0111 08:45:10.643245       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 08:45:10.643253       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 08:45:10.743402       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 08:45:10.743435       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 08:45:10.743467       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4f37daff12209c7cbe5088130ea4aea7c5917b3aef9b3d2100f02d6698061862] <==
	I0111 08:47:11.303006       1 serving.go:386] Generated self-signed cert in-memory
	W0111 08:47:14.914401       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 08:47:14.914514       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 08:47:14.914549       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 08:47:14.914591       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 08:47:15.156500       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 08:47:15.164524       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 08:47:15.171409       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 08:47:15.171500       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 08:47:15.189825       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 08:47:15.171522       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 08:47:15.304738       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [9bc6322fbfe5b3aeb3cc28d0de46bd73d50006f6f24385238e2d536bfb5ca556] <==
	
	
	==> kubelet <==
	Jan 11 08:47:11 pause-042270 kubelet[1306]: E0111 08:47:11.438828    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-042270" containerName="kube-controller-manager"
	Jan 11 08:47:11 pause-042270 kubelet[1306]: I0111 08:47:11.439488    1306 scope.go:122] "RemoveContainer" containerID="e4f927ff762a02e598b216fe9c75e5e7250c2463356b93b401329e89a8fd483d"
	Jan 11 08:47:11 pause-042270 kubelet[1306]: I0111 08:47:11.484262    1306 scope.go:122] "RemoveContainer" containerID="7df702dfc7823702c012152170a5970a42c963801b47788b5c53f0a68a4b5b0a"
	Jan 11 08:47:11 pause-042270 kubelet[1306]: E0111 08:47:11.953449    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-042270" containerName="kube-scheduler"
	Jan 11 08:47:12 pause-042270 kubelet[1306]: E0111 08:47:12.960486    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-042270" containerName="kube-scheduler"
	Jan 11 08:47:14 pause-042270 kubelet[1306]: E0111 08:47:14.857042    1306 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-042270\" is forbidden: User \"system:node:pause-042270\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-042270' and this object" podUID="d7383fa44257ed5f93002c69daf59f20" pod="kube-system/kube-apiserver-pause-042270"
	Jan 11 08:47:14 pause-042270 kubelet[1306]: E0111 08:47:14.959726    1306 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-042270\" is forbidden: User \"system:node:pause-042270\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-042270' and this object" podUID="f4f884d12ab36489436115387489b6b5" pod="kube-system/kube-controller-manager-pause-042270"
	Jan 11 08:47:15 pause-042270 kubelet[1306]: E0111 08:47:15.057633    1306 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-bdk4s\" is forbidden: User \"system:node:pause-042270\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-042270' and this object" podUID="e4b86581-45ce-4c68-b7d0-c1a7f3ef088f" pod="kube-system/kube-proxy-bdk4s"
	Jan 11 08:47:15 pause-042270 kubelet[1306]: E0111 08:47:15.161819    1306 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-45gwk\" is forbidden: User \"system:node:pause-042270\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-042270' and this object" podUID="7a16ed15-2c49-4c4a-90a5-bc8d0439b6b0" pod="kube-system/kindnet-45gwk"
	Jan 11 08:47:15 pause-042270 kubelet[1306]: E0111 08:47:15.188733    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-042270" containerName="etcd"
	Jan 11 08:47:15 pause-042270 kubelet[1306]: E0111 08:47:15.214677    1306 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-rvvbr\" is forbidden: User \"system:node:pause-042270\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-042270' and this object" podUID="b97d5e73-1b07-4f9e-afdb-f28f370a600e" pod="kube-system/coredns-7d764666f9-rvvbr"
	Jan 11 08:47:16 pause-042270 kubelet[1306]: E0111 08:47:16.876306    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-042270" containerName="kube-apiserver"
	Jan 11 08:47:16 pause-042270 kubelet[1306]: E0111 08:47:16.918040    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-042270" containerName="kube-controller-manager"
	Jan 11 08:47:21 pause-042270 kubelet[1306]: E0111 08:47:21.440668    1306 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-rvvbr" containerName="coredns"
	Jan 11 08:47:22 pause-042270 kubelet[1306]: E0111 08:47:22.061740    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-042270" containerName="kube-scheduler"
	Jan 11 08:47:22 pause-042270 kubelet[1306]: E0111 08:47:22.999656    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-042270" containerName="kube-scheduler"
	Jan 11 08:47:24 pause-042270 kubelet[1306]: W0111 08:47:24.380370    1306 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Jan 11 08:47:25 pause-042270 kubelet[1306]: E0111 08:47:25.190043    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-042270" containerName="etcd"
	Jan 11 08:47:26 pause-042270 kubelet[1306]: E0111 08:47:26.012916    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-042270" containerName="etcd"
	Jan 11 08:47:26 pause-042270 kubelet[1306]: E0111 08:47:26.908097    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-042270" containerName="kube-apiserver"
	Jan 11 08:47:26 pause-042270 kubelet[1306]: E0111 08:47:26.961453    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-042270" containerName="kube-controller-manager"
	Jan 11 08:47:27 pause-042270 kubelet[1306]: E0111 08:47:27.015806    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-042270" containerName="kube-apiserver"
	Jan 11 08:47:28 pause-042270 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 08:47:28 pause-042270 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 08:47:28 pause-042270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-042270 -n pause-042270
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-042270 -n pause-042270: exit status 2 (601.549299ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-042270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-931581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-931581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (260.167996ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:03:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-931581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-931581 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-931581 describe deploy/metrics-server -n kube-system: exit status 1 (91.04099ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-931581 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
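For context on the MK_ADDON_ENABLE_PAUSED error above: before enabling an addon, minikube first checks whether the cluster is paused, and for this profile that check shells out to `sudo runc list -f json`; the check itself errors out because the default runc state directory /run/runc does not exist on this CRI-O node ("open /run/runc: no such file or directory"). The Go sketch below is only a hypothetical approximation of such a check, not minikube's actual implementation; the runcContainer type and listPaused helper are illustrative names.

// Hypothetical sketch only: approximates a "list paused containers via runc"
// check like the one referenced in the failure above; not minikube's code.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer holds an illustrative subset of the fields emitted by
// `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused shells out to runc and returns the IDs of paused containers.
// When the runc root (/run/runc by default) is missing, as on this CRI-O
// node, the command fails and the error is surfaced to the caller.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("runc list: %v: %s", err, strings.TrimSpace(string(out)))
	}
	var containers []runcContainer
	if s := strings.TrimSpace(string(out)); s != "" && s != "null" {
		if err := json.Unmarshal([]byte(s), &containers); err != nil {
			return nil, err
		}
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		// This branch corresponds to the "check paused: list paused: runc"
		// failure path reported in the stderr block above.
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused container IDs:", ids)
}

Under this reading, the addon enable command never reaches the actual enable step: the paused-state probe fails first, which is why the deployment "metrics-server" is also reported as not found in the kubectl describe step that follows.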
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-931581
helpers_test.go:244: (dbg) docker inspect old-k8s-version-931581:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b",
	        "Created": "2026-01-11T09:02:21.912162594Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 765214,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:02:21.98247727Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/hostname",
	        "HostsPath": "/var/lib/docker/containers/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/hosts",
	        "LogPath": "/var/lib/docker/containers/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b-json.log",
	        "Name": "/old-k8s-version-931581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-931581:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-931581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b",
	                "LowerDir": "/var/lib/docker/overlay2/1a13c8b1136b833866b5da78a40fb0aa10f6414034f887f96467846c64a4c542-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a13c8b1136b833866b5da78a40fb0aa10f6414034f887f96467846c64a4c542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a13c8b1136b833866b5da78a40fb0aa10f6414034f887f96467846c64a4c542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a13c8b1136b833866b5da78a40fb0aa10f6414034f887f96467846c64a4c542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-931581",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-931581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-931581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-931581",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-931581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7742d29145993f27a6c675cadf98afdfb3174d4af081946b280b812955075156",
	            "SandboxKey": "/var/run/docker/netns/7742d2914599",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-931581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:cd:43:04:ed:ad",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b56797f12ccaa56ea8e718a635d68c0d137f49a40ab56b2bf2b5a235f2e0cf2",
	                    "EndpointID": "79b6db56e053f1a180ec1c357ff271d2e076842d83296585c85897dadd93de1d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-931581",
	                        "93b661cce923"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-931581 -n old-k8s-version-931581
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-931581 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-931581 logs -n 25: (1.172992849s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-293572 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo containerd config dump                                                                                                                                                                                                  │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo crio config                                                                                                                                                                                                             │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ delete  │ -p cilium-293572                                                                                                                                                                                                                              │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │ 11 Jan 26 08:55 UTC │
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │ 11 Jan 26 08:56 UTC │
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ delete  │ -p cert-expiration-448134                                                                                                                                                                                                                     │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ start   │ -p force-systemd-flag-630015 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-630015 │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │                     │
	│ delete  │ -p force-systemd-env-472282                                                                                                                                                                                                                   │ force-systemd-env-472282  │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:01 UTC │
	│ start   │ -p cert-options-459267 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ cert-options-459267 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ -p cert-options-459267 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ delete  │ -p cert-options-459267                                                                                                                                                                                                                        │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-931581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:02:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:02:16.266203  764777 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:02:16.266388  764777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:02:16.266400  764777 out.go:374] Setting ErrFile to fd 2...
	I0111 09:02:16.266405  764777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:02:16.266690  764777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:02:16.267143  764777 out.go:368] Setting JSON to false
	I0111 09:02:16.268011  764777 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13486,"bootTime":1768108650,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:02:16.268090  764777 start.go:143] virtualization:  
	I0111 09:02:16.271856  764777 out.go:179] * [old-k8s-version-931581] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:02:16.276505  764777 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:02:16.276652  764777 notify.go:221] Checking for updates...
	I0111 09:02:16.283158  764777 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:02:16.286521  764777 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:02:16.289805  764777 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:02:16.292958  764777 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:02:16.296124  764777 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:02:16.299816  764777 config.go:182] Loaded profile config "force-systemd-flag-630015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:02:16.299972  764777 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:02:16.320625  764777 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:02:16.320744  764777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:02:16.390233  764777 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:02:16.380279371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:02:16.390340  764777 docker.go:319] overlay module found
	I0111 09:02:16.393453  764777 out.go:179] * Using the docker driver based on user configuration
	I0111 09:02:16.396433  764777 start.go:309] selected driver: docker
	I0111 09:02:16.396452  764777 start.go:928] validating driver "docker" against <nil>
	I0111 09:02:16.396467  764777 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:02:16.397192  764777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:02:16.453108  764777 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:02:16.443705589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:02:16.453273  764777 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 09:02:16.453495  764777 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:02:16.456567  764777 out.go:179] * Using Docker driver with root privileges
	I0111 09:02:16.459517  764777 cni.go:84] Creating CNI manager for ""
	I0111 09:02:16.459593  764777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:02:16.459606  764777 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 09:02:16.459688  764777 start.go:353] cluster config:
	{Name:old-k8s-version-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:02:16.464672  764777 out.go:179] * Starting "old-k8s-version-931581" primary control-plane node in "old-k8s-version-931581" cluster
	I0111 09:02:16.467655  764777 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:02:16.470504  764777 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:02:16.473332  764777 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 09:02:16.473383  764777 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0111 09:02:16.473409  764777 cache.go:65] Caching tarball of preloaded images
	I0111 09:02:16.473426  764777 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:02:16.473497  764777 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:02:16.473509  764777 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0111 09:02:16.473624  764777 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/config.json ...
	I0111 09:02:16.473641  764777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/config.json: {Name:mke2a4f8e4194724cd2c4e336b57a4ddb67a4e6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:02:16.493164  764777 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:02:16.493190  764777 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:02:16.493208  764777 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:02:16.493238  764777 start.go:360] acquireMachinesLock for old-k8s-version-931581: {Name:mkab3bc7162aba2e88171e4e683a8fd13db4db95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:02:16.493356  764777 start.go:364] duration metric: took 97.174µs to acquireMachinesLock for "old-k8s-version-931581"
	I0111 09:02:16.493390  764777 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:02:16.493477  764777 start.go:125] createHost starting for "" (driver="docker")
	I0111 09:02:16.496883  764777 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 09:02:16.497157  764777 start.go:159] libmachine.API.Create for "old-k8s-version-931581" (driver="docker")
	I0111 09:02:16.497200  764777 client.go:173] LocalClient.Create starting
	I0111 09:02:16.497282  764777 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem
	I0111 09:02:16.497323  764777 main.go:144] libmachine: Decoding PEM data...
	I0111 09:02:16.497343  764777 main.go:144] libmachine: Parsing certificate...
	I0111 09:02:16.497413  764777 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem
	I0111 09:02:16.497437  764777 main.go:144] libmachine: Decoding PEM data...
	I0111 09:02:16.497449  764777 main.go:144] libmachine: Parsing certificate...
	I0111 09:02:16.497832  764777 cli_runner.go:164] Run: docker network inspect old-k8s-version-931581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 09:02:16.514418  764777 cli_runner.go:211] docker network inspect old-k8s-version-931581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 09:02:16.514526  764777 network_create.go:284] running [docker network inspect old-k8s-version-931581] to gather additional debugging logs...
	I0111 09:02:16.514547  764777 cli_runner.go:164] Run: docker network inspect old-k8s-version-931581
	W0111 09:02:16.530340  764777 cli_runner.go:211] docker network inspect old-k8s-version-931581 returned with exit code 1
	I0111 09:02:16.530375  764777 network_create.go:287] error running [docker network inspect old-k8s-version-931581]: docker network inspect old-k8s-version-931581: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-931581 not found
	I0111 09:02:16.530390  764777 network_create.go:289] output of [docker network inspect old-k8s-version-931581]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-931581 not found
	
	** /stderr **
	I0111 09:02:16.530508  764777 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:02:16.547058  764777 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-113e3e286bbe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:2e:86:95:08:19} reservation:<nil>}
	I0111 09:02:16.547407  764777 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461c1a9d970d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:7e:25:fe:d0:0d} reservation:<nil>}
	I0111 09:02:16.547746  764777 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a38e10816f85 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:42:af:ae:32:ae} reservation:<nil>}
	I0111 09:02:16.548019  764777 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6ac2cdd04afb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:0e:43:8e:04:e3} reservation:<nil>}
	I0111 09:02:16.548460  764777 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a23cb0}
	I0111 09:02:16.548485  764777 network_create.go:124] attempt to create docker network old-k8s-version-931581 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 09:02:16.548552  764777 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-931581 old-k8s-version-931581
	I0111 09:02:16.609507  764777 network_create.go:108] docker network old-k8s-version-931581 192.168.85.0/24 created
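	The network.go lines above show the run skipping 192.168.49.0/24 through 192.168.76.0/24 because local bridge interfaces already occupy them, then settling on 192.168.85.0/24 for the new docker network. As a rough illustration of that kind of scan (not minikube's actual implementation; the step of 9 in the third octet is simply inferred from the candidates seen in this log), a minimal Go sketch could check each candidate /24 against the host's interface table:

```go
// Hypothetical sketch of picking a free 192.168.x.0/24 subnet by skipping
// ranges already assigned to local interfaces, as the log above does.
package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any local interface address falls inside cidr.
func subnetTaken(cidr *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && cidr.Contains(ipnet.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Candidate subnets stepped by 9 in the third octet (49, 58, 67, 76, 85, ...),
	// matching the sequence visible in this log.
	for third := 49; third <= 255; third += 9 {
		s := fmt.Sprintf("192.168.%d.0/24", third)
		_, ipnet, err := net.ParseCIDR(s)
		if err != nil {
			continue
		}
		if !subnetTaken(ipnet) {
			fmt.Println("using free private subnet", s)
			return
		}
		fmt.Println("skipping subnet", s, "that is taken")
	}
}
```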
	I0111 09:02:16.609538  764777 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-931581" container
	I0111 09:02:16.609624  764777 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 09:02:16.625490  764777 cli_runner.go:164] Run: docker volume create old-k8s-version-931581 --label name.minikube.sigs.k8s.io=old-k8s-version-931581 --label created_by.minikube.sigs.k8s.io=true
	I0111 09:02:16.644970  764777 oci.go:103] Successfully created a docker volume old-k8s-version-931581
	I0111 09:02:16.645059  764777 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-931581-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-931581 --entrypoint /usr/bin/test -v old-k8s-version-931581:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 09:02:17.161398  764777 oci.go:107] Successfully prepared a docker volume old-k8s-version-931581
	I0111 09:02:17.161473  764777 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 09:02:17.161486  764777 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 09:02:17.161554  764777 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-931581:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 09:02:21.835459  764777 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-931581:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.673867909s)
	I0111 09:02:21.835514  764777 kic.go:203] duration metric: took 4.674018105s to extract preloaded images to volume ...
	W0111 09:02:21.835652  764777 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 09:02:21.835761  764777 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 09:02:21.897333  764777 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-931581 --name old-k8s-version-931581 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-931581 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-931581 --network old-k8s-version-931581 --ip 192.168.85.2 --volume old-k8s-version-931581:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 09:02:22.211787  764777 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Running}}
	I0111 09:02:22.237681  764777 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:02:22.260547  764777 cli_runner.go:164] Run: docker exec old-k8s-version-931581 stat /var/lib/dpkg/alternatives/iptables
	I0111 09:02:22.313250  764777 oci.go:144] the created container "old-k8s-version-931581" has a running status.
	I0111 09:02:22.313280  764777 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa...
	I0111 09:02:22.617717  764777 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 09:02:22.649350  764777 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:02:22.682817  764777 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 09:02:22.682837  764777 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-931581 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 09:02:22.748811  764777 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:02:22.771039  764777 machine.go:94] provisionDockerMachine start ...
	I0111 09:02:22.771130  764777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:02:22.793072  764777 main.go:144] libmachine: Using SSH client type: native
	I0111 09:02:22.793409  764777 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33783 <nil> <nil>}
	I0111 09:02:22.793426  764777 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 09:02:22.794012  764777 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49200->127.0.0.1:33783: read: connection reset by peer
	I0111 09:02:25.941924  764777 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-931581
	
	I0111 09:02:25.941952  764777 ubuntu.go:182] provisioning hostname "old-k8s-version-931581"
	I0111 09:02:25.942021  764777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:02:25.960083  764777 main.go:144] libmachine: Using SSH client type: native
	I0111 09:02:25.960405  764777 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33783 <nil> <nil>}
	I0111 09:02:25.960424  764777 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-931581 && echo "old-k8s-version-931581" | sudo tee /etc/hostname
	I0111 09:02:26.126117  764777 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-931581
	
	I0111 09:02:26.126268  764777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:02:26.145572  764777 main.go:144] libmachine: Using SSH client type: native
	I0111 09:02:26.145893  764777 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33783 <nil> <nil>}
	I0111 09:02:26.145909  764777 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-931581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-931581/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-931581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 09:02:26.294290  764777 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 09:02:26.294318  764777 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 09:02:26.294341  764777 ubuntu.go:190] setting up certificates
	I0111 09:02:26.294351  764777 provision.go:84] configureAuth start
	I0111 09:02:26.294420  764777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-931581
	I0111 09:02:26.312237  764777 provision.go:143] copyHostCerts
	I0111 09:02:26.312313  764777 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 09:02:26.312324  764777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 09:02:26.312407  764777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 09:02:26.312506  764777 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 09:02:26.312515  764777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 09:02:26.312541  764777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 09:02:26.312624  764777 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 09:02:26.312635  764777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 09:02:26.312668  764777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 09:02:26.312719  764777 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-931581 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-931581]
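	The provision step above issues a server certificate whose SANs cover 127.0.0.1, 192.168.85.2, localhost, minikube and old-k8s-version-931581. A minimal sketch of producing a certificate with that SAN set using Go's crypto/x509 is shown below; it is self-signed to stay self-contained, whereas the run above signs with the ca.pem/ca-key.pem pair referenced in the log, and the 26280h lifetime is copied from the CertExpiration value in the cluster config.

```go
// Minimal sketch (not minikube's code) of issuing a server certificate with
// the IP and DNS SANs listed in the provision.go line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-931581"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-931581"},
	}
	// Self-signed here for brevity; the real flow signs with the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```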
	I0111 09:02:26.651153  764777 provision.go:177] copyRemoteCerts
	I0111 09:02:26.651218  764777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 09:02:26.651264  764777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:02:26.668378  764777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33783 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:02:26.774325  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 09:02:26.792618  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0111 09:02:26.811989  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 09:02:26.831440  764777 provision.go:87] duration metric: took 537.066427ms to configureAuth
	I0111 09:02:26.831470  764777 ubuntu.go:206] setting minikube options for container-runtime
	I0111 09:02:26.831666  764777 config.go:182] Loaded profile config "old-k8s-version-931581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 09:02:26.831781  764777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:02:26.852398  764777 main.go:144] libmachine: Using SSH client type: native
	I0111 09:02:26.852720  764777 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33783 <nil> <nil>}
	I0111 09:02:26.852740  764777 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 09:02:27.166816  764777 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 09:02:27.166838  764777 machine.go:97] duration metric: took 4.395775578s to provisionDockerMachine
	I0111 09:02:27.166849  764777 client.go:176] duration metric: took 10.669642077s to LocalClient.Create
	I0111 09:02:27.166864  764777 start.go:167] duration metric: took 10.669709384s to libmachine.API.Create "old-k8s-version-931581"
	I0111 09:02:27.166872  764777 start.go:293] postStartSetup for "old-k8s-version-931581" (driver="docker")
	I0111 09:02:27.166904  764777 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 09:02:27.166978  764777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 09:02:27.167022  764777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:02:27.191946  764777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33783 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:02:27.302195  764777 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 09:02:27.305527  764777 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 09:02:27.305562  764777 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 09:02:27.305574  764777 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 09:02:27.305635  764777 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 09:02:27.305715  764777 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 09:02:27.305821  764777 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 09:02:27.313308  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:02:27.332592  764777 start.go:296] duration metric: took 165.699199ms for postStartSetup
	I0111 09:02:27.333029  764777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-931581
	I0111 09:02:27.350284  764777 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/config.json ...
	I0111 09:02:27.350675  764777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 09:02:27.350733  764777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:02:27.367394  764777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33783 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:02:27.467525  764777 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 09:02:27.472597  764777 start.go:128] duration metric: took 10.979103607s to createHost
	I0111 09:02:27.472625  764777 start.go:83] releasing machines lock for "old-k8s-version-931581", held for 10.979254395s
	I0111 09:02:27.472701  764777 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-931581
	I0111 09:02:27.489539  764777 ssh_runner.go:195] Run: cat /version.json
	I0111 09:02:27.489597  764777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:02:27.489871  764777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 09:02:27.489933  764777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:02:27.512689  764777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33783 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:02:27.520200  764777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33783 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:02:27.618012  764777 ssh_runner.go:195] Run: systemctl --version
	I0111 09:02:27.720160  764777 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 09:02:27.757909  764777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 09:02:27.762465  764777 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 09:02:27.762553  764777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 09:02:27.791626  764777 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 09:02:27.791655  764777 start.go:496] detecting cgroup driver to use...
	I0111 09:02:27.791691  764777 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 09:02:27.791750  764777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 09:02:27.810026  764777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 09:02:27.823431  764777 docker.go:218] disabling cri-docker service (if available) ...
	I0111 09:02:27.823519  764777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 09:02:27.841596  764777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 09:02:27.861477  764777 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 09:02:28.000057  764777 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 09:02:28.136114  764777 docker.go:234] disabling docker service ...
	I0111 09:02:28.136232  764777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 09:02:28.158827  764777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 09:02:28.172689  764777 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 09:02:28.299124  764777 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 09:02:28.417316  764777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 09:02:28.431326  764777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 09:02:28.446885  764777 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0111 09:02:28.446968  764777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:02:28.456217  764777 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 09:02:28.456299  764777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:02:28.465576  764777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:02:28.475031  764777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:02:28.484521  764777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 09:02:28.493342  764777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:02:28.503294  764777 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:02:28.517996  764777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:02:28.528982  764777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 09:02:28.537535  764777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 09:02:28.545701  764777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:02:28.660833  764777 ssh_runner.go:195] Run: sudo systemctl restart crio
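	The sed commands above rewrite keys such as pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. The sketch below performs the same in-place key replacement in Go; it is a hypothetical equivalent of the logged sed invocations, not minikube's code, and simply mirrors the file path and values from this run.

```go
// Hypothetical Go equivalent of `sed -i 's|^.*KEY = .*$|KEY = "VALUE"|' FILE`
// as used on the CRI-O drop-in above.
package main

import (
	"os"
	"regexp"
)

// setKey replaces any existing `key = ...` line in path with `key = "value"`.
func setKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setKey(conf, "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
		panic(err)
	}
	if err := setKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
		panic(err)
	}
}
```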
	I0111 09:02:28.846382  764777 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 09:02:28.846458  764777 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 09:02:28.850518  764777 start.go:574] Will wait 60s for crictl version
	I0111 09:02:28.850625  764777 ssh_runner.go:195] Run: which crictl
	I0111 09:02:28.854065  764777 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 09:02:28.878879  764777 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 09:02:28.879004  764777 ssh_runner.go:195] Run: crio --version
	I0111 09:02:28.906859  764777 ssh_runner.go:195] Run: crio --version
	I0111 09:02:28.940278  764777 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I0111 09:02:28.943056  764777 cli_runner.go:164] Run: docker network inspect old-k8s-version-931581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:02:28.959775  764777 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 09:02:28.963694  764777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:02:28.973392  764777 kubeadm.go:884] updating cluster {Name:old-k8s-version-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 09:02:28.973525  764777 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 09:02:28.973590  764777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:02:29.006917  764777 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:02:29.006944  764777 crio.go:433] Images already preloaded, skipping extraction
	I0111 09:02:29.007011  764777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:02:29.033673  764777 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:02:29.033706  764777 cache_images.go:86] Images are preloaded, skipping loading
	I0111 09:02:29.033740  764777 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I0111 09:02:29.033842  764777 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-931581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 09:02:29.033925  764777 ssh_runner.go:195] Run: crio config
	I0111 09:02:29.089367  764777 cni.go:84] Creating CNI manager for ""
	I0111 09:02:29.089397  764777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:02:29.089416  764777 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 09:02:29.089439  764777 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-931581 NodeName:old-k8s-version-931581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 09:02:29.089577  764777 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-931581"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 09:02:29.089652  764777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0111 09:02:29.097912  764777 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 09:02:29.098030  764777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 09:02:29.105925  764777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0111 09:02:29.118836  764777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 09:02:29.133237  764777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
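	The 2160-byte kubeadm.yaml.new copied above corresponds to the config printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a rough illustration of how such a file can be rendered from cluster parameters, the sketch below fills a trimmed-down InitConfiguration template with the node IP, API server port and cluster name from this run; the template text is illustrative only, not minikube's actual template.

```go
// Illustrative rendering of a kubeadm InitConfiguration fragment from the
// parameters seen in this log run.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.ClusterName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	err := tmpl.Execute(os.Stdout, struct {
		NodeIP      string
		Port        int
		ClusterName string
	}{"192.168.85.2", 8443, "old-k8s-version-931581"})
	if err != nil {
		panic(err)
	}
}
```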
	I0111 09:02:29.147146  764777 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 09:02:29.150950  764777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
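	The one-liner above rewrites /etc/hosts by filtering out any existing control-plane.minikube.internal entry, appending the new 192.168.85.2 mapping, and copying the result back into place. A Go equivalent of that filter-and-append step might look like the sketch below; the staging path /tmp/hosts.minikube is made up for the example, and the final `sudo cp` from the log is left out so the sketch runs without root.

```go
// Hypothetical Go counterpart of the /etc/hosts update shown in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.2\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Like `grep -v`: drop any stale control-plane.minikube.internal mapping.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	// Stage the new file; the logged one-liner then installs it with `sudo cp`.
	tmp := "/tmp/hosts.minikube"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("staged updated hosts file at", tmp)
}
```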
	I0111 09:02:29.160546  764777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:02:29.286393  764777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:02:29.303888  764777 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581 for IP: 192.168.85.2
	I0111 09:02:29.303915  764777 certs.go:195] generating shared ca certs ...
	I0111 09:02:29.303932  764777 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:02:29.304071  764777 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 09:02:29.304121  764777 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 09:02:29.304133  764777 certs.go:257] generating profile certs ...
	I0111 09:02:29.304188  764777 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.key
	I0111 09:02:29.304215  764777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt with IP's: []
	I0111 09:02:29.843820  764777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt ...
	I0111 09:02:29.843857  764777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: {Name:mkd94c7665ccda61162d00035ae8d26fdb0f4384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:02:29.844066  764777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.key ...
	I0111 09:02:29.844083  764777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.key: {Name:mk2903f676dea39877128722eedfe2acd1f79bab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:02:29.844188  764777 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.key.eb6f276c
	I0111 09:02:29.844206  764777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.crt.eb6f276c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0111 09:02:30.028798  764777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.crt.eb6f276c ...
	I0111 09:02:30.028837  764777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.crt.eb6f276c: {Name:mkb051d2a5c06a8599f7bf2193c0871b74b477f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:02:30.029047  764777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.key.eb6f276c ...
	I0111 09:02:30.029057  764777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.key.eb6f276c: {Name:mkb9b89198b44fcbdf2895f04a8eca19e60083f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:02:30.029134  764777 certs.go:382] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.crt.eb6f276c -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.crt
	I0111 09:02:30.029220  764777 certs.go:386] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.key.eb6f276c -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.key
	I0111 09:02:30.029275  764777 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.key
	I0111 09:02:30.029288  764777 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.crt with IP's: []
	I0111 09:02:30.169976  764777 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.crt ...
	I0111 09:02:30.170015  764777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.crt: {Name:mk15ba832970cfa0f7468e5e8939c8ea44dc1758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:02:30.170274  764777 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.key ...
	I0111 09:02:30.170292  764777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.key: {Name:mk6a84d7fcf7879a4a11ccbe3898ad6375e25f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:02:30.170485  764777 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 09:02:30.170550  764777 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 09:02:30.170576  764777 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 09:02:30.170612  764777 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 09:02:30.170642  764777 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 09:02:30.170672  764777 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 09:02:30.170722  764777 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:02:30.171367  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 09:02:30.222744  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 09:02:30.251855  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 09:02:30.286953  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 09:02:30.305652  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0111 09:02:30.325032  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 09:02:30.343872  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 09:02:30.362315  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 09:02:30.381166  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 09:02:30.399367  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 09:02:30.417764  764777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 09:02:30.436373  764777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 09:02:30.450492  764777 ssh_runner.go:195] Run: openssl version
	I0111 09:02:30.457027  764777 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:02:30.464999  764777 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 09:02:30.472732  764777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:02:30.477092  764777 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:02:30.477181  764777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:02:30.518549  764777 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 09:02:30.526187  764777 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 09:02:30.533630  764777 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 09:02:30.541486  764777 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 09:02:30.549067  764777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 09:02:30.553167  764777 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 09:02:30.553237  764777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 09:02:30.594362  764777 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 09:02:30.601907  764777 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/576907.pem /etc/ssl/certs/51391683.0
	I0111 09:02:30.609244  764777 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 09:02:30.616379  764777 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 09:02:30.623819  764777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 09:02:30.627924  764777 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 09:02:30.628043  764777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 09:02:30.669183  764777 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 09:02:30.677003  764777 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5769072.pem /etc/ssl/certs/3ec20f2e.0
	I0111 09:02:30.684387  764777 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 09:02:30.687971  764777 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 09:02:30.688025  764777 kubeadm.go:401] StartCluster: {Name:old-k8s-version-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:02:30.688102  764777 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 09:02:30.688183  764777 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 09:02:30.715278  764777 cri.go:96] found id: ""
	I0111 09:02:30.715401  764777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 09:02:30.723277  764777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 09:02:30.731440  764777 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 09:02:30.731538  764777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 09:02:30.739348  764777 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 09:02:30.739381  764777 kubeadm.go:158] found existing configuration files:
	
	I0111 09:02:30.739451  764777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 09:02:30.747361  764777 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 09:02:30.747485  764777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 09:02:30.754978  764777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 09:02:30.762617  764777 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 09:02:30.762697  764777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 09:02:30.770423  764777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 09:02:30.778231  764777 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 09:02:30.778302  764777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 09:02:30.785472  764777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 09:02:30.793250  764777 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 09:02:30.793319  764777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 09:02:30.800747  764777 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 09:02:30.851028  764777 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I0111 09:02:30.851093  764777 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 09:02:30.885882  764777 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 09:02:30.885961  764777 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 09:02:30.886005  764777 kubeadm.go:319] OS: Linux
	I0111 09:02:30.886055  764777 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 09:02:30.886107  764777 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 09:02:30.886177  764777 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 09:02:30.886230  764777 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 09:02:30.886281  764777 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 09:02:30.886348  764777 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 09:02:30.886397  764777 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 09:02:30.886449  764777 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 09:02:30.886506  764777 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 09:02:31.011978  764777 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 09:02:31.012093  764777 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 09:02:31.012193  764777 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0111 09:02:31.181950  764777 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 09:02:31.188622  764777 out.go:252]   - Generating certificates and keys ...
	I0111 09:02:31.188733  764777 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 09:02:31.188804  764777 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 09:02:31.637444  764777 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 09:02:31.878037  764777 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 09:02:32.387055  764777 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 09:02:33.838965  764777 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 09:02:34.087682  764777 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 09:02:34.088028  764777 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-931581] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 09:02:34.475727  764777 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 09:02:34.476065  764777 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-931581] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 09:02:34.798107  764777 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 09:02:35.262295  764777 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 09:02:35.693215  764777 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 09:02:35.693495  764777 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 09:02:36.329085  764777 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 09:02:36.527599  764777 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 09:02:37.044521  764777 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 09:02:37.684387  764777 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 09:02:37.685723  764777 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 09:02:37.689751  764777 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 09:02:37.693351  764777 out.go:252]   - Booting up control plane ...
	I0111 09:02:37.693484  764777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 09:02:37.693576  764777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 09:02:37.714552  764777 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 09:02:37.739860  764777 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 09:02:37.739991  764777 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 09:02:37.740036  764777 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 09:02:37.881602  764777 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0111 09:02:45.884598  764777 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.006395 seconds
	I0111 09:02:45.884723  764777 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 09:02:45.902762  764777 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 09:02:46.432160  764777 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 09:02:46.432381  764777 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-931581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 09:02:46.953498  764777 kubeadm.go:319] [bootstrap-token] Using token: 4lj5sb.sosvpu8sagxeakmn
	I0111 09:02:46.959038  764777 out.go:252]   - Configuring RBAC rules ...
	I0111 09:02:46.959175  764777 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 09:02:46.959523  764777 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 09:02:46.978425  764777 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 09:02:46.990327  764777 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 09:02:46.999457  764777 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 09:02:47.004627  764777 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 09:02:47.029522  764777 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 09:02:47.314865  764777 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 09:02:47.365759  764777 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 09:02:47.367141  764777 kubeadm.go:319] 
	I0111 09:02:47.367215  764777 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 09:02:47.367220  764777 kubeadm.go:319] 
	I0111 09:02:47.367327  764777 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 09:02:47.367333  764777 kubeadm.go:319] 
	I0111 09:02:47.367358  764777 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 09:02:47.367418  764777 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 09:02:47.367468  764777 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 09:02:47.367472  764777 kubeadm.go:319] 
	I0111 09:02:47.367526  764777 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 09:02:47.367529  764777 kubeadm.go:319] 
	I0111 09:02:47.367577  764777 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 09:02:47.367581  764777 kubeadm.go:319] 
	I0111 09:02:47.367633  764777 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 09:02:47.367711  764777 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 09:02:47.367780  764777 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 09:02:47.367783  764777 kubeadm.go:319] 
	I0111 09:02:47.367878  764777 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 09:02:47.367956  764777 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 09:02:47.367960  764777 kubeadm.go:319] 
	I0111 09:02:47.368044  764777 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4lj5sb.sosvpu8sagxeakmn \
	I0111 09:02:47.368147  764777 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb \
	I0111 09:02:47.368167  764777 kubeadm.go:319] 	--control-plane 
	I0111 09:02:47.368171  764777 kubeadm.go:319] 
	I0111 09:02:47.368256  764777 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 09:02:47.368260  764777 kubeadm.go:319] 
	I0111 09:02:47.368341  764777 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4lj5sb.sosvpu8sagxeakmn \
	I0111 09:02:47.368448  764777 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb 
	I0111 09:02:47.371611  764777 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:02:47.371722  764777 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 09:02:47.371739  764777 cni.go:84] Creating CNI manager for ""
	I0111 09:02:47.371750  764777 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:02:47.374898  764777 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 09:02:47.377735  764777 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 09:02:47.387213  764777 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I0111 09:02:47.387238  764777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 09:02:47.411194  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0111 09:02:48.467712  764777 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.056471061s)
	I0111 09:02:48.467757  764777 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 09:02:48.467866  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:48.467894  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-931581 minikube.k8s.io/updated_at=2026_01_11T09_02_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=old-k8s-version-931581 minikube.k8s.io/primary=true
	I0111 09:02:48.627527  764777 ops.go:34] apiserver oom_adj: -16
	I0111 09:02:48.627733  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:49.128321  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:49.628693  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:50.128753  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:50.628703  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:51.128606  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:51.627879  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:52.127917  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:52.628829  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:53.127865  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:53.627848  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:54.127971  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:54.628113  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:55.128174  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:55.628716  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:56.128469  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:56.628393  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:57.128510  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:57.628048  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:58.128753  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:58.628068  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:59.127759  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:02:59.628446  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:03:00.154781  764777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:03:00.533547  764777 kubeadm.go:1114] duration metric: took 12.065738253s to wait for elevateKubeSystemPrivileges
	I0111 09:03:00.533578  764777 kubeadm.go:403] duration metric: took 29.845556791s to StartCluster
	I0111 09:03:00.533596  764777 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:03:00.533680  764777 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:03:00.534447  764777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:03:00.534893  764777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 09:03:00.535251  764777 config.go:182] Loaded profile config "old-k8s-version-931581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 09:03:00.535365  764777 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:03:00.535419  764777 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:03:00.536118  764777 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-931581"
	I0111 09:03:00.536152  764777 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-931581"
	I0111 09:03:00.536129  764777 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-931581"
	I0111 09:03:00.536277  764777 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-931581"
	I0111 09:03:00.536206  764777 host.go:66] Checking if "old-k8s-version-931581" exists ...
	I0111 09:03:00.536679  764777 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:00.537040  764777 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:00.546202  764777 out.go:179] * Verifying Kubernetes components...
	I0111 09:03:00.549303  764777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:03:00.579521  764777 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-931581"
	I0111 09:03:00.579567  764777 host.go:66] Checking if "old-k8s-version-931581" exists ...
	I0111 09:03:00.580011  764777 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:00.593478  764777 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:03:00.596484  764777 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:03:00.596509  764777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:03:00.596580  764777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:00.619008  764777 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:03:00.619041  764777 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:03:00.619110  764777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:00.642549  764777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33783 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:00.665844  764777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33783 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:00.877833  764777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:03:00.952571  764777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0111 09:03:00.952685  764777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:03:00.963691  764777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:03:01.490692  764777 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0111 09:03:01.491411  764777 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-931581" to be "Ready" ...
	I0111 09:03:01.808233  764777 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0111 09:03:01.810999  764777 addons.go:530] duration metric: took 1.275577705s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0111 09:03:01.994384  764777 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-931581" context rescaled to 1 replicas
	W0111 09:03:03.497778  764777 node_ready.go:57] node "old-k8s-version-931581" has "Ready":"False" status (will retry)
	W0111 09:03:05.994930  764777 node_ready.go:57] node "old-k8s-version-931581" has "Ready":"False" status (will retry)
	W0111 09:03:07.995298  764777 node_ready.go:57] node "old-k8s-version-931581" has "Ready":"False" status (will retry)
	W0111 09:03:10.495562  764777 node_ready.go:57] node "old-k8s-version-931581" has "Ready":"False" status (will retry)
	W0111 09:03:12.994318  764777 node_ready.go:57] node "old-k8s-version-931581" has "Ready":"False" status (will retry)
	I0111 09:03:14.994779  764777 node_ready.go:49] node "old-k8s-version-931581" is "Ready"
	I0111 09:03:14.994811  764777 node_ready.go:38] duration metric: took 13.503381466s for node "old-k8s-version-931581" to be "Ready" ...
	I0111 09:03:14.994825  764777 api_server.go:52] waiting for apiserver process to appear ...
	I0111 09:03:14.994887  764777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 09:03:15.010241  764777 api_server.go:72] duration metric: took 14.474614125s to wait for apiserver process to appear ...
	I0111 09:03:15.010271  764777 api_server.go:88] waiting for apiserver healthz status ...
	I0111 09:03:15.010294  764777 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 09:03:15.020531  764777 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0111 09:03:15.024428  764777 api_server.go:141] control plane version: v1.28.0
	I0111 09:03:15.024458  764777 api_server.go:131] duration metric: took 14.179589ms to wait for apiserver health ...
	I0111 09:03:15.024469  764777 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 09:03:15.029725  764777 system_pods.go:59] 8 kube-system pods found
	I0111 09:03:15.029763  764777 system_pods.go:61] "coredns-5dd5756b68-2gkt5" [fed76c30-7304-4890-9b21-67f48729cb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:03:15.029771  764777 system_pods.go:61] "etcd-old-k8s-version-931581" [2846557f-ef29-426a-9620-d7182b3d2e5c] Running
	I0111 09:03:15.029779  764777 system_pods.go:61] "kindnet-vl8hm" [1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3] Running
	I0111 09:03:15.029783  764777 system_pods.go:61] "kube-apiserver-old-k8s-version-931581" [8a8af346-ef92-4b59-9a35-5bcfa837543f] Running
	I0111 09:03:15.029788  764777 system_pods.go:61] "kube-controller-manager-old-k8s-version-931581" [9cd07045-f552-41ef-8ea6-c2584ba61279] Running
	I0111 09:03:15.029792  764777 system_pods.go:61] "kube-proxy-xg9bv" [489cf8f4-64d7-44c0-b233-c8235d397932] Running
	I0111 09:03:15.029797  764777 system_pods.go:61] "kube-scheduler-old-k8s-version-931581" [8feb9a4e-b8ad-473e-ac05-2ce9ed02a7d6] Running
	I0111 09:03:15.029804  764777 system_pods.go:61] "storage-provisioner" [d7c7d49d-3c49-49aa-97c5-9692e0c23d99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 09:03:15.029810  764777 system_pods.go:74] duration metric: took 5.334854ms to wait for pod list to return data ...
	I0111 09:03:15.029820  764777 default_sa.go:34] waiting for default service account to be created ...
	I0111 09:03:15.034578  764777 default_sa.go:45] found service account: "default"
	I0111 09:03:15.034622  764777 default_sa.go:55] duration metric: took 4.795336ms for default service account to be created ...
	I0111 09:03:15.034635  764777 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 09:03:15.039496  764777 system_pods.go:86] 8 kube-system pods found
	I0111 09:03:15.039524  764777 system_pods.go:89] "coredns-5dd5756b68-2gkt5" [fed76c30-7304-4890-9b21-67f48729cb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:03:15.039534  764777 system_pods.go:89] "etcd-old-k8s-version-931581" [2846557f-ef29-426a-9620-d7182b3d2e5c] Running
	I0111 09:03:15.039543  764777 system_pods.go:89] "kindnet-vl8hm" [1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3] Running
	I0111 09:03:15.039549  764777 system_pods.go:89] "kube-apiserver-old-k8s-version-931581" [8a8af346-ef92-4b59-9a35-5bcfa837543f] Running
	I0111 09:03:15.039554  764777 system_pods.go:89] "kube-controller-manager-old-k8s-version-931581" [9cd07045-f552-41ef-8ea6-c2584ba61279] Running
	I0111 09:03:15.039558  764777 system_pods.go:89] "kube-proxy-xg9bv" [489cf8f4-64d7-44c0-b233-c8235d397932] Running
	I0111 09:03:15.039563  764777 system_pods.go:89] "kube-scheduler-old-k8s-version-931581" [8feb9a4e-b8ad-473e-ac05-2ce9ed02a7d6] Running
	I0111 09:03:15.039570  764777 system_pods.go:89] "storage-provisioner" [d7c7d49d-3c49-49aa-97c5-9692e0c23d99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 09:03:15.039601  764777 retry.go:84] will retry after 300ms: missing components: kube-dns
	I0111 09:03:15.346917  764777 system_pods.go:86] 8 kube-system pods found
	I0111 09:03:15.346955  764777 system_pods.go:89] "coredns-5dd5756b68-2gkt5" [fed76c30-7304-4890-9b21-67f48729cb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:03:15.346962  764777 system_pods.go:89] "etcd-old-k8s-version-931581" [2846557f-ef29-426a-9620-d7182b3d2e5c] Running
	I0111 09:03:15.346970  764777 system_pods.go:89] "kindnet-vl8hm" [1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3] Running
	I0111 09:03:15.346975  764777 system_pods.go:89] "kube-apiserver-old-k8s-version-931581" [8a8af346-ef92-4b59-9a35-5bcfa837543f] Running
	I0111 09:03:15.346981  764777 system_pods.go:89] "kube-controller-manager-old-k8s-version-931581" [9cd07045-f552-41ef-8ea6-c2584ba61279] Running
	I0111 09:03:15.346985  764777 system_pods.go:89] "kube-proxy-xg9bv" [489cf8f4-64d7-44c0-b233-c8235d397932] Running
	I0111 09:03:15.346989  764777 system_pods.go:89] "kube-scheduler-old-k8s-version-931581" [8feb9a4e-b8ad-473e-ac05-2ce9ed02a7d6] Running
	I0111 09:03:15.346995  764777 system_pods.go:89] "storage-provisioner" [d7c7d49d-3c49-49aa-97c5-9692e0c23d99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 09:03:15.625233  764777 system_pods.go:86] 8 kube-system pods found
	I0111 09:03:15.625276  764777 system_pods.go:89] "coredns-5dd5756b68-2gkt5" [fed76c30-7304-4890-9b21-67f48729cb7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:03:15.625285  764777 system_pods.go:89] "etcd-old-k8s-version-931581" [2846557f-ef29-426a-9620-d7182b3d2e5c] Running
	I0111 09:03:15.625292  764777 system_pods.go:89] "kindnet-vl8hm" [1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3] Running
	I0111 09:03:15.625297  764777 system_pods.go:89] "kube-apiserver-old-k8s-version-931581" [8a8af346-ef92-4b59-9a35-5bcfa837543f] Running
	I0111 09:03:15.625302  764777 system_pods.go:89] "kube-controller-manager-old-k8s-version-931581" [9cd07045-f552-41ef-8ea6-c2584ba61279] Running
	I0111 09:03:15.625338  764777 system_pods.go:89] "kube-proxy-xg9bv" [489cf8f4-64d7-44c0-b233-c8235d397932] Running
	I0111 09:03:15.625349  764777 system_pods.go:89] "kube-scheduler-old-k8s-version-931581" [8feb9a4e-b8ad-473e-ac05-2ce9ed02a7d6] Running
	I0111 09:03:15.625356  764777 system_pods.go:89] "storage-provisioner" [d7c7d49d-3c49-49aa-97c5-9692e0c23d99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 09:03:16.037136  764777 system_pods.go:86] 8 kube-system pods found
	I0111 09:03:16.037177  764777 system_pods.go:89] "coredns-5dd5756b68-2gkt5" [fed76c30-7304-4890-9b21-67f48729cb7f] Running
	I0111 09:03:16.037186  764777 system_pods.go:89] "etcd-old-k8s-version-931581" [2846557f-ef29-426a-9620-d7182b3d2e5c] Running
	I0111 09:03:16.037190  764777 system_pods.go:89] "kindnet-vl8hm" [1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3] Running
	I0111 09:03:16.037196  764777 system_pods.go:89] "kube-apiserver-old-k8s-version-931581" [8a8af346-ef92-4b59-9a35-5bcfa837543f] Running
	I0111 09:03:16.037201  764777 system_pods.go:89] "kube-controller-manager-old-k8s-version-931581" [9cd07045-f552-41ef-8ea6-c2584ba61279] Running
	I0111 09:03:16.037206  764777 system_pods.go:89] "kube-proxy-xg9bv" [489cf8f4-64d7-44c0-b233-c8235d397932] Running
	I0111 09:03:16.037210  764777 system_pods.go:89] "kube-scheduler-old-k8s-version-931581" [8feb9a4e-b8ad-473e-ac05-2ce9ed02a7d6] Running
	I0111 09:03:16.037215  764777 system_pods.go:89] "storage-provisioner" [d7c7d49d-3c49-49aa-97c5-9692e0c23d99] Running
	I0111 09:03:16.037223  764777 system_pods.go:126] duration metric: took 1.002582811s to wait for k8s-apps to be running ...
	I0111 09:03:16.037235  764777 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 09:03:16.037299  764777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:03:16.051251  764777 system_svc.go:56] duration metric: took 14.005753ms WaitForService to wait for kubelet
	I0111 09:03:16.051285  764777 kubeadm.go:587] duration metric: took 15.515665578s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:03:16.051304  764777 node_conditions.go:102] verifying NodePressure condition ...
	I0111 09:03:16.054550  764777 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 09:03:16.054596  764777 node_conditions.go:123] node cpu capacity is 2
	I0111 09:03:16.054611  764777 node_conditions.go:105] duration metric: took 3.300531ms to run NodePressure ...
	I0111 09:03:16.054624  764777 start.go:242] waiting for startup goroutines ...
	I0111 09:03:16.054683  764777 start.go:247] waiting for cluster config update ...
	I0111 09:03:16.054703  764777 start.go:256] writing updated cluster config ...
	I0111 09:03:16.055027  764777 ssh_runner.go:195] Run: rm -f paused
	I0111 09:03:16.058930  764777 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:03:16.063213  764777 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-2gkt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:03:16.069070  764777 pod_ready.go:94] pod "coredns-5dd5756b68-2gkt5" is "Ready"
	I0111 09:03:16.069098  764777 pod_ready.go:86] duration metric: took 5.786209ms for pod "coredns-5dd5756b68-2gkt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:03:16.072485  764777 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:03:16.077862  764777 pod_ready.go:94] pod "etcd-old-k8s-version-931581" is "Ready"
	I0111 09:03:16.077894  764777 pod_ready.go:86] duration metric: took 5.384275ms for pod "etcd-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:03:16.081168  764777 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:03:16.090864  764777 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-931581" is "Ready"
	I0111 09:03:16.090945  764777 pod_ready.go:86] duration metric: took 9.745119ms for pod "kube-apiserver-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:03:16.094459  764777 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:03:16.463184  764777 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-931581" is "Ready"
	I0111 09:03:16.463219  764777 pod_ready.go:86] duration metric: took 368.734005ms for pod "kube-controller-manager-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:03:16.665010  764777 pod_ready.go:83] waiting for pod "kube-proxy-xg9bv" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:03:17.063297  764777 pod_ready.go:94] pod "kube-proxy-xg9bv" is "Ready"
	I0111 09:03:17.063324  764777 pod_ready.go:86] duration metric: took 398.217094ms for pod "kube-proxy-xg9bv" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:03:17.264323  764777 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:03:17.663329  764777 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-931581" is "Ready"
	I0111 09:03:17.663357  764777 pod_ready.go:86] duration metric: took 399.007357ms for pod "kube-scheduler-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:03:17.663370  764777 pod_ready.go:40] duration metric: took 1.604405947s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:03:17.748465  764777 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I0111 09:03:17.751674  764777 out.go:203] 
	W0111 09:03:17.754690  764777 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I0111 09:03:17.757770  764777 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:03:17.760625  764777 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-931581" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 09:03:15 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:15.290568343Z" level=info msg="Created container c8c830700f9c78116515dffb020cc2e212bc6c1ad43d260c67b210a033f6901b: kube-system/coredns-5dd5756b68-2gkt5/coredns" id=e11ed94e-b90a-4148-bf59-06020ad4e4eb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:03:15 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:15.291671225Z" level=info msg="Starting container: c8c830700f9c78116515dffb020cc2e212bc6c1ad43d260c67b210a033f6901b" id=f591f66d-d369-40a4-b236-d09240c01070 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:03:15 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:15.296393182Z" level=info msg="Started container" PID=1962 containerID=c8c830700f9c78116515dffb020cc2e212bc6c1ad43d260c67b210a033f6901b description=kube-system/coredns-5dd5756b68-2gkt5/coredns id=f591f66d-d369-40a4-b236-d09240c01070 name=/runtime.v1.RuntimeService/StartContainer sandboxID=867431094ee33185bf2d2c2961c81a6b79c8b54558ef3f43d39fd598f18733f5
	Jan 11 09:03:18 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:18.275797207Z" level=info msg="Running pod sandbox: default/busybox/POD" id=770f389e-ab4d-494a-9201-7902d12f07ca name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:03:18 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:18.275912089Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:03:18 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:18.28292196Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5e57a0ccaad97c637b9c8d6550f4cb88d9a539beced0c8e30ac1108f5df1bc9b UID:0d413a31-5797-4ca1-95a0-a108b606a94b NetNS:/var/run/netns/c066ba37-163b-4d0a-8775-e03fb95fdb45 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000fc3660}] Aliases:map[]}"
	Jan 11 09:03:18 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:18.283098955Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 11 09:03:18 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:18.296267651Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5e57a0ccaad97c637b9c8d6550f4cb88d9a539beced0c8e30ac1108f5df1bc9b UID:0d413a31-5797-4ca1-95a0-a108b606a94b NetNS:/var/run/netns/c066ba37-163b-4d0a-8775-e03fb95fdb45 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000fc3660}] Aliases:map[]}"
	Jan 11 09:03:18 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:18.296577628Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 11 09:03:18 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:18.30130701Z" level=info msg="Ran pod sandbox 5e57a0ccaad97c637b9c8d6550f4cb88d9a539beced0c8e30ac1108f5df1bc9b with infra container: default/busybox/POD" id=770f389e-ab4d-494a-9201-7902d12f07ca name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:03:18 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:18.30246737Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=95a79919-212a-497e-b142-add293c0fb98 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:03:18 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:18.302595642Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=95a79919-212a-497e-b142-add293c0fb98 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:03:18 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:18.302682257Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=95a79919-212a-497e-b142-add293c0fb98 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:03:18 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:18.303304813Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=93e899b3-c47c-4dac-bbd9-047349030495 name=/runtime.v1.ImageService/PullImage
	Jan 11 09:03:18 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:18.303627877Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 11 09:03:20 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:20.496906473Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=93e899b3-c47c-4dac-bbd9-047349030495 name=/runtime.v1.ImageService/PullImage
	Jan 11 09:03:20 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:20.498353252Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dff72bd3-031e-4e9b-96d5-84a57245671c name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:03:20 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:20.500084023Z" level=info msg="Creating container: default/busybox/busybox" id=e3f3ac81-3d99-4f62-8f47-481e19a33b0e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:03:20 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:20.500395781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:03:20 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:20.505543869Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:03:20 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:20.506542413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:03:20 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:20.527854426Z" level=info msg="Created container 705704bea552446cd6f5da8f373a31edeaadfceb09b7a3557150cbe170835003: default/busybox/busybox" id=e3f3ac81-3d99-4f62-8f47-481e19a33b0e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:03:20 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:20.530862268Z" level=info msg="Starting container: 705704bea552446cd6f5da8f373a31edeaadfceb09b7a3557150cbe170835003" id=8bee7af4-a94a-4709-bb26-3bc7f0e1ebf3 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:03:20 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:20.532550249Z" level=info msg="Started container" PID=2021 containerID=705704bea552446cd6f5da8f373a31edeaadfceb09b7a3557150cbe170835003 description=default/busybox/busybox id=8bee7af4-a94a-4709-bb26-3bc7f0e1ebf3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e57a0ccaad97c637b9c8d6550f4cb88d9a539beced0c8e30ac1108f5df1bc9b
	Jan 11 09:03:27 old-k8s-version-931581 crio[841]: time="2026-01-11T09:03:27.157133607Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	705704bea5524       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   5e57a0ccaad97       busybox                                          default
	c8c830700f9c7       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   867431094ee33       coredns-5dd5756b68-2gkt5                         kube-system
	5e12578701d39       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   032517f1fe507       storage-provisioner                              kube-system
	21da90fde7c04       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   ebd525c8742a5       kindnet-vl8hm                                    kube-system
	0117f21d5f790       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   818951fcd382d       kube-proxy-xg9bv                                 kube-system
	83b0b1042486e       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      49 seconds ago      Running             kube-scheduler            0                   36439523bcc27       kube-scheduler-old-k8s-version-931581            kube-system
	1b2eb4796ec32       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      49 seconds ago      Running             kube-apiserver            0                   02aca5677c9ef       kube-apiserver-old-k8s-version-931581            kube-system
	6e0c2432d7d62       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      49 seconds ago      Running             etcd                      0                   99298db3ea06e       etcd-old-k8s-version-931581                      kube-system
	163355b6abcb3       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      49 seconds ago      Running             kube-controller-manager   0                   6075abb09e3fe       kube-controller-manager-old-k8s-version-931581   kube-system
	
	
	==> coredns [c8c830700f9c78116515dffb020cc2e212bc6c1ad43d260c67b210a033f6901b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56004 - 42920 "HINFO IN 6726046730615269220.569448175012532507. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004706145s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-931581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-931581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=old-k8s-version-931581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_02_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:02:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-931581
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:03:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:03:18 +0000   Sun, 11 Jan 2026 09:02:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:03:18 +0000   Sun, 11 Jan 2026 09:02:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:03:18 +0000   Sun, 11 Jan 2026 09:02:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 09:03:18 +0000   Sun, 11 Jan 2026 09:03:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-931581
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                af69ca9e-bf38-4107-aa6e-3001379de44e
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-2gkt5                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-931581                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-vl8hm                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-931581             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-931581    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-xg9bv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-931581             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-931581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-931581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-931581 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-931581 event: Registered Node old-k8s-version-931581 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-931581 status is now: NodeReady
	
	
	==> dmesg <==
	[  +3.600210] overlayfs: idmapped layers are currently not supported
	[Jan11 08:30] overlayfs: idmapped layers are currently not supported
	[Jan11 08:31] overlayfs: idmapped layers are currently not supported
	[Jan11 08:32] overlayfs: idmapped layers are currently not supported
	[Jan11 08:35] overlayfs: idmapped layers are currently not supported
	[Jan11 08:36] overlayfs: idmapped layers are currently not supported
	[Jan11 08:37] overlayfs: idmapped layers are currently not supported
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6e0c2432d7d62e88959df8723894e52caddcd31bdb339c553eacfc15ebfb577b] <==
	{"level":"info","ts":"2026-01-11T09:02:39.679757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-11T09:02:39.679873Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2026-01-11T09:02:39.680293Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-11T09:02:39.680604Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T09:02:39.68043Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:02:39.681151Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:02:39.681076Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T09:02:40.346177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-11T09:02:40.346294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-11T09:02:40.346349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2026-01-11T09:02:40.346388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:02:40.346422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:02:40.346467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2026-01-11T09:02:40.346497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:02:40.35032Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-931581 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:02:40.350401Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:02:40.351424Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T09:02:40.362279Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T09:02:40.368295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:02:40.369312Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-11T09:02:40.369483Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:02:40.369521Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T09:02:40.368293Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T09:02:40.369636Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T09:02:40.369688Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 09:03:28 up  3:45,  0 user,  load average: 1.56, 1.42, 1.90
	Linux old-k8s-version-931581 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [21da90fde7c04d63c4ad16c0ddf26e0cddfc90fbb1ae2b58fbe4206090cab4b7] <==
	I0111 09:03:04.547592       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:03:04.547821       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 09:03:04.547937       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:03:04.547954       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:03:04.547965       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:03:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:03:04.839210       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:03:04.847561       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:03:04.847606       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:03:04.848083       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 09:03:05.038383       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 09:03:05.038482       1 metrics.go:72] Registering metrics
	I0111 09:03:05.038585       1 controller.go:711] "Syncing nftables rules"
	I0111 09:03:14.755028       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:03:14.755071       1 main.go:301] handling current node
	I0111 09:03:24.751965       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:03:24.752034       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1b2eb4796ec32f519571fa0bb08055c3b0f5cf81566fda8008229303300e5933] <==
	I0111 09:02:43.907294       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 09:02:43.907329       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0111 09:02:43.907653       1 aggregator.go:166] initial CRD sync complete...
	I0111 09:02:43.907673       1 autoregister_controller.go:141] Starting autoregister controller
	I0111 09:02:43.907677       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 09:02:43.907683       1 cache.go:39] Caches are synced for autoregister controller
	I0111 09:02:43.909233       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0111 09:02:43.924494       1 controller.go:624] quota admission added evaluator for: namespaces
	I0111 09:02:43.938188       1 shared_informer.go:318] Caches are synced for configmaps
	I0111 09:02:43.970821       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:02:44.637128       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0111 09:02:44.642474       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0111 09:02:44.642512       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0111 09:02:45.470464       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:02:45.525134       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:02:45.649879       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0111 09:02:45.659559       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0111 09:02:45.660837       1 controller.go:624] quota admission added evaluator for: endpoints
	I0111 09:02:45.665955       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:02:45.876230       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0111 09:02:47.296116       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0111 09:02:47.313196       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0111 09:02:47.324997       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0111 09:03:00.698049       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0111 09:03:00.781350       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [163355b6abcb3910fd2005c58ab2c5729e5dfdf6fff0d79e5a899379acffd38e] <==
	I0111 09:03:00.285859       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-931581" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0111 09:03:00.285946       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-931581" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0111 09:03:00.415099       1 event.go:307] "Event occurred" object="kube-system/etcd-old-k8s-version-931581" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0111 09:03:00.481061       1 shared_informer.go:318] Caches are synced for garbage collector
	I0111 09:03:00.514909       1 shared_informer.go:318] Caches are synced for garbage collector
	I0111 09:03:00.514944       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0111 09:03:00.719983       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0111 09:03:00.866973       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vl8hm"
	I0111 09:03:00.878646       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xg9bv"
	I0111 09:03:00.878683       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-zl2hj"
	I0111 09:03:00.939131       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-2gkt5"
	I0111 09:03:00.961226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="242.278724ms"
	I0111 09:03:00.979020       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.739332ms"
	I0111 09:03:00.979105       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.782µs"
	I0111 09:03:01.516342       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0111 09:03:01.541772       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-zl2hj"
	I0111 09:03:01.560593       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.022704ms"
	I0111 09:03:01.571653       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.433432ms"
	I0111 09:03:01.572001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.675µs"
	I0111 09:03:14.899043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.408µs"
	I0111 09:03:14.919115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="198.296µs"
	I0111 09:03:15.032119       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0111 09:03:15.680380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.068437ms"
	I0111 09:03:15.713188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.930614ms"
	I0111 09:03:15.713645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="122.496µs"
	
	
	==> kube-proxy [0117f21d5f7900078b3ec1ae7f09601635a7c9c8ea3586543c14442a6a62a747] <==
	I0111 09:03:02.224985       1 server_others.go:69] "Using iptables proxy"
	I0111 09:03:02.239213       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0111 09:03:02.262402       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:03:02.265359       1 server_others.go:152] "Using iptables Proxier"
	I0111 09:03:02.265507       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0111 09:03:02.265528       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0111 09:03:02.265605       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0111 09:03:02.265836       1 server.go:846] "Version info" version="v1.28.0"
	I0111 09:03:02.265854       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:03:02.266951       1 config.go:188] "Starting service config controller"
	I0111 09:03:02.267078       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0111 09:03:02.267128       1 config.go:97] "Starting endpoint slice config controller"
	I0111 09:03:02.267156       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0111 09:03:02.268924       1 config.go:315] "Starting node config controller"
	I0111 09:03:02.272865       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0111 09:03:02.272939       1 shared_informer.go:318] Caches are synced for node config
	I0111 09:03:02.367792       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0111 09:03:02.367792       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [83b0b1042486e27cfebcdc05c289d3ea4d2b7cfc59fd6d610b14fb5ae6218db5] <==
	W0111 09:02:43.912236       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0111 09:02:43.912327       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 09:02:44.771716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0111 09:02:44.771839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0111 09:02:44.838037       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0111 09:02:44.838215       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0111 09:02:44.904520       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0111 09:02:44.904646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0111 09:02:44.949311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0111 09:02:44.949430       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0111 09:02:45.017275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0111 09:02:45.017417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0111 09:02:45.017519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0111 09:02:45.017573       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0111 09:02:45.029568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0111 09:02:45.029701       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0111 09:02:45.039504       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0111 09:02:45.039646       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 09:02:45.165191       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0111 09:02:45.165945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0111 09:02:45.165896       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0111 09:02:45.166868       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0111 09:02:45.234651       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0111 09:02:45.234781       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0111 09:02:48.270231       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 11 09:03:01 old-k8s-version-931581 kubelet[1382]: I0111 09:03:01.106997    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/489cf8f4-64d7-44c0-b233-c8235d397932-kube-proxy\") pod \"kube-proxy-xg9bv\" (UID: \"489cf8f4-64d7-44c0-b233-c8235d397932\") " pod="kube-system/kube-proxy-xg9bv"
	Jan 11 09:03:01 old-k8s-version-931581 kubelet[1382]: I0111 09:03:01.107112    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/489cf8f4-64d7-44c0-b233-c8235d397932-lib-modules\") pod \"kube-proxy-xg9bv\" (UID: \"489cf8f4-64d7-44c0-b233-c8235d397932\") " pod="kube-system/kube-proxy-xg9bv"
	Jan 11 09:03:01 old-k8s-version-931581 kubelet[1382]: I0111 09:03:01.107140    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3-cni-cfg\") pod \"kindnet-vl8hm\" (UID: \"1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3\") " pod="kube-system/kindnet-vl8hm"
	Jan 11 09:03:01 old-k8s-version-931581 kubelet[1382]: I0111 09:03:01.107190    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3-xtables-lock\") pod \"kindnet-vl8hm\" (UID: \"1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3\") " pod="kube-system/kindnet-vl8hm"
	Jan 11 09:03:01 old-k8s-version-931581 kubelet[1382]: I0111 09:03:01.107220    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwvpq\" (UniqueName: \"kubernetes.io/projected/1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3-kube-api-access-kwvpq\") pod \"kindnet-vl8hm\" (UID: \"1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3\") " pod="kube-system/kindnet-vl8hm"
	Jan 11 09:03:01 old-k8s-version-931581 kubelet[1382]: I0111 09:03:01.107279    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7pmr\" (UniqueName: \"kubernetes.io/projected/489cf8f4-64d7-44c0-b233-c8235d397932-kube-api-access-k7pmr\") pod \"kube-proxy-xg9bv\" (UID: \"489cf8f4-64d7-44c0-b233-c8235d397932\") " pod="kube-system/kube-proxy-xg9bv"
	Jan 11 09:03:01 old-k8s-version-931581 kubelet[1382]: I0111 09:03:01.107304    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3-lib-modules\") pod \"kindnet-vl8hm\" (UID: \"1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3\") " pod="kube-system/kindnet-vl8hm"
	Jan 11 09:03:01 old-k8s-version-931581 kubelet[1382]: I0111 09:03:01.107363    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/489cf8f4-64d7-44c0-b233-c8235d397932-xtables-lock\") pod \"kube-proxy-xg9bv\" (UID: \"489cf8f4-64d7-44c0-b233-c8235d397932\") " pod="kube-system/kube-proxy-xg9bv"
	Jan 11 09:03:02 old-k8s-version-931581 kubelet[1382]: W0111 09:03:02.126615    1382 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/crio-818951fcd382d0ed8b3157127f68d9e8fa423700b7646c380bb9ab8a595ac135 WatchSource:0}: Error finding container 818951fcd382d0ed8b3157127f68d9e8fa423700b7646c380bb9ab8a595ac135: Status 404 returned error can't find the container with id 818951fcd382d0ed8b3157127f68d9e8fa423700b7646c380bb9ab8a595ac135
	Jan 11 09:03:02 old-k8s-version-931581 kubelet[1382]: I0111 09:03:02.637134    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xg9bv" podStartSLOduration=2.637087529 podCreationTimestamp="2026-01-11 09:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:03:02.636540232 +0000 UTC m=+15.375443804" watchObservedRunningTime="2026-01-11 09:03:02.637087529 +0000 UTC m=+15.375991085"
	Jan 11 09:03:07 old-k8s-version-931581 kubelet[1382]: I0111 09:03:07.577027    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-vl8hm" podStartSLOduration=5.275541271 podCreationTimestamp="2026-01-11 09:03:00 +0000 UTC" firstStartedPulling="2026-01-11 09:03:02.128791403 +0000 UTC m=+14.867694958" lastFinishedPulling="2026-01-11 09:03:04.430222969 +0000 UTC m=+17.169126525" observedRunningTime="2026-01-11 09:03:04.641252993 +0000 UTC m=+17.380156565" watchObservedRunningTime="2026-01-11 09:03:07.576972838 +0000 UTC m=+20.315876402"
	Jan 11 09:03:14 old-k8s-version-931581 kubelet[1382]: I0111 09:03:14.865221    1382 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 11 09:03:14 old-k8s-version-931581 kubelet[1382]: I0111 09:03:14.897237    1382 topology_manager.go:215] "Topology Admit Handler" podUID="fed76c30-7304-4890-9b21-67f48729cb7f" podNamespace="kube-system" podName="coredns-5dd5756b68-2gkt5"
	Jan 11 09:03:14 old-k8s-version-931581 kubelet[1382]: I0111 09:03:14.903682    1382 topology_manager.go:215] "Topology Admit Handler" podUID="d7c7d49d-3c49-49aa-97c5-9692e0c23d99" podNamespace="kube-system" podName="storage-provisioner"
	Jan 11 09:03:15 old-k8s-version-931581 kubelet[1382]: I0111 09:03:15.008313    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwq9s\" (UniqueName: \"kubernetes.io/projected/d7c7d49d-3c49-49aa-97c5-9692e0c23d99-kube-api-access-bwq9s\") pod \"storage-provisioner\" (UID: \"d7c7d49d-3c49-49aa-97c5-9692e0c23d99\") " pod="kube-system/storage-provisioner"
	Jan 11 09:03:15 old-k8s-version-931581 kubelet[1382]: I0111 09:03:15.008381    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d7c7d49d-3c49-49aa-97c5-9692e0c23d99-tmp\") pod \"storage-provisioner\" (UID: \"d7c7d49d-3c49-49aa-97c5-9692e0c23d99\") " pod="kube-system/storage-provisioner"
	Jan 11 09:03:15 old-k8s-version-931581 kubelet[1382]: I0111 09:03:15.008431    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fed76c30-7304-4890-9b21-67f48729cb7f-config-volume\") pod \"coredns-5dd5756b68-2gkt5\" (UID: \"fed76c30-7304-4890-9b21-67f48729cb7f\") " pod="kube-system/coredns-5dd5756b68-2gkt5"
	Jan 11 09:03:15 old-k8s-version-931581 kubelet[1382]: I0111 09:03:15.008460    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lllr\" (UniqueName: \"kubernetes.io/projected/fed76c30-7304-4890-9b21-67f48729cb7f-kube-api-access-4lllr\") pod \"coredns-5dd5756b68-2gkt5\" (UID: \"fed76c30-7304-4890-9b21-67f48729cb7f\") " pod="kube-system/coredns-5dd5756b68-2gkt5"
	Jan 11 09:03:15 old-k8s-version-931581 kubelet[1382]: W0111 09:03:15.212640    1382 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/crio-032517f1fe50715a7e81bfbe90ca2d50b875dd0d59125453f61c4f1ddaf044df WatchSource:0}: Error finding container 032517f1fe50715a7e81bfbe90ca2d50b875dd0d59125453f61c4f1ddaf044df: Status 404 returned error can't find the container with id 032517f1fe50715a7e81bfbe90ca2d50b875dd0d59125453f61c4f1ddaf044df
	Jan 11 09:03:15 old-k8s-version-931581 kubelet[1382]: W0111 09:03:15.249225    1382 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/crio-867431094ee33185bf2d2c2961c81a6b79c8b54558ef3f43d39fd598f18733f5 WatchSource:0}: Error finding container 867431094ee33185bf2d2c2961c81a6b79c8b54558ef3f43d39fd598f18733f5: Status 404 returned error can't find the container with id 867431094ee33185bf2d2c2961c81a6b79c8b54558ef3f43d39fd598f18733f5
	Jan 11 09:03:15 old-k8s-version-931581 kubelet[1382]: I0111 09:03:15.675802    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2gkt5" podStartSLOduration=15.675758356 podCreationTimestamp="2026-01-11 09:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:03:15.675577275 +0000 UTC m=+28.414480839" watchObservedRunningTime="2026-01-11 09:03:15.675758356 +0000 UTC m=+28.414661920"
	Jan 11 09:03:17 old-k8s-version-931581 kubelet[1382]: I0111 09:03:17.973297    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.973256108 podCreationTimestamp="2026-01-11 09:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:03:15.725581098 +0000 UTC m=+28.464484654" watchObservedRunningTime="2026-01-11 09:03:17.973256108 +0000 UTC m=+30.712159664"
	Jan 11 09:03:17 old-k8s-version-931581 kubelet[1382]: I0111 09:03:17.974003    1382 topology_manager.go:215] "Topology Admit Handler" podUID="0d413a31-5797-4ca1-95a0-a108b606a94b" podNamespace="default" podName="busybox"
	Jan 11 09:03:18 old-k8s-version-931581 kubelet[1382]: I0111 09:03:18.027523    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khmts\" (UniqueName: \"kubernetes.io/projected/0d413a31-5797-4ca1-95a0-a108b606a94b-kube-api-access-khmts\") pod \"busybox\" (UID: \"0d413a31-5797-4ca1-95a0-a108b606a94b\") " pod="default/busybox"
	Jan 11 09:03:18 old-k8s-version-931581 kubelet[1382]: W0111 09:03:18.298498    1382 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/crio-5e57a0ccaad97c637b9c8d6550f4cb88d9a539beced0c8e30ac1108f5df1bc9b WatchSource:0}: Error finding container 5e57a0ccaad97c637b9c8d6550f4cb88d9a539beced0c8e30ac1108f5df1bc9b: Status 404 returned error can't find the container with id 5e57a0ccaad97c637b9c8d6550f4cb88d9a539beced0c8e30ac1108f5df1bc9b
	
	
	==> storage-provisioner [5e12578701d3985983f6d18b90c6fb51344dfc11996db81473b40afe4bc05639] <==
	I0111 09:03:15.267447       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 09:03:15.286232       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 09:03:15.286349       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0111 09:03:15.307297       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 09:03:15.307603       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-931581_30cd249b-c908-48fb-b28b-52300e5e073b!
	I0111 09:03:15.309830       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8a0480bd-74ad-46e7-a509-867a9d06bbdb", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-931581_30cd249b-c908-48fb-b28b-52300e5e073b became leader
	I0111 09:03:15.408649       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-931581_30cd249b-c908-48fb-b28b-52300e5e073b!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-931581 -n old-k8s-version-931581
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-931581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.38s)
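
The post-mortem above ends with a pod-phase check: helpers_test.go shells out to kubectl with a field selector to list any pods not in the Running phase. A minimal standalone Go sketch of that same check is shown below; it is not the harness's code, only an illustration that assumes kubectl is on PATH and reuses the profile/context name from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same kubectl invocation the harness logs above; the context name is
	// taken from this report and would differ for other profiles.
	out, err := exec.Command("kubectl",
		"--context", "old-k8s-version-931581",
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	// An empty result means every pod reported status.phase == Running.
	pods := strings.Fields(string(out))
	if len(pods) == 0 {
		fmt.Println("all pods are Running")
		return
	}
	fmt.Printf("non-Running pods: %v\n", pods)
}

In this run the command returned no pod names, matching the node and container status dumps above, so the failure is attributable to the test assertion rather than to unhealthy workloads.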

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-931581 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-931581 --alsologtostderr -v=1: exit status 80 (1.9102078s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-931581 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 09:04:40.466486  771699 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:04:40.466624  771699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:04:40.466637  771699 out.go:374] Setting ErrFile to fd 2...
	I0111 09:04:40.466642  771699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:04:40.466921  771699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:04:40.467166  771699 out.go:368] Setting JSON to false
	I0111 09:04:40.467183  771699 mustload.go:66] Loading cluster: old-k8s-version-931581
	I0111 09:04:40.467624  771699 config.go:182] Loaded profile config "old-k8s-version-931581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 09:04:40.468072  771699 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:04:40.486796  771699 host.go:66] Checking if "old-k8s-version-931581" exists ...
	I0111 09:04:40.487130  771699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:04:40.546835  771699 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-11 09:04:40.536839996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:04:40.547526  771699 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-931581 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s
(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0111 09:04:40.551024  771699 out.go:179] * Pausing node old-k8s-version-931581 ... 
	I0111 09:04:40.554795  771699 host.go:66] Checking if "old-k8s-version-931581" exists ...
	I0111 09:04:40.555198  771699 ssh_runner.go:195] Run: systemctl --version
	I0111 09:04:40.555256  771699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:04:40.572747  771699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:04:40.677155  771699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:04:40.690073  771699 pause.go:52] kubelet running: true
	I0111 09:04:40.690272  771699 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:04:40.917549  771699 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:04:40.917635  771699 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:04:41.009674  771699 cri.go:96] found id: "e2c5f1a589123d27ad903daf6aa3bac49856c181b9976f8336c6af69771590d4"
	I0111 09:04:41.009745  771699 cri.go:96] found id: "b727470771749969490fec69bb6f9cc8d254a874166542b81d6c9dc796246f68"
	I0111 09:04:41.009766  771699 cri.go:96] found id: "73d9a283074adb9ebdd703527aa0eca8069e6e36375607411f549fdc67d6fa8e"
	I0111 09:04:41.009787  771699 cri.go:96] found id: "3a849cb62cfb1018959839831eb215de72cee5a888a77c6d5bd24e8f28010ef7"
	I0111 09:04:41.009819  771699 cri.go:96] found id: "1d23c38218c09008ed0624126143b92ef4ae15746f4fb4fec5a67590f7b14aaf"
	I0111 09:04:41.009837  771699 cri.go:96] found id: "3d8f07a9089011370ce578f9e96c1fca727ce96da1e432ccd926d97b1ea3545e"
	I0111 09:04:41.009854  771699 cri.go:96] found id: "8df31809024e57fa523fb773427e68d11f877dc356c992cd0201a1b33573775d"
	I0111 09:04:41.009872  771699 cri.go:96] found id: "be3cae0859a767cb1d810c075beaa74a4697bb90f12bf159bea72e3e87da79a6"
	I0111 09:04:41.009906  771699 cri.go:96] found id: "da8138b59df8207b192f5696b4f20a5ebb599324657398e62b0076ccd122e19f"
	I0111 09:04:41.009935  771699 cri.go:96] found id: "199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576"
	I0111 09:04:41.009955  771699 cri.go:96] found id: "bd56401a5c752eae1d3614979ba146f00260bc3c39f492b048ec64ee36838966"
	I0111 09:04:41.009983  771699 cri.go:96] found id: ""
	I0111 09:04:41.010067  771699 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:04:41.022285  771699 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:04:41Z" level=error msg="open /run/runc: no such file or directory"
	I0111 09:04:41.205731  771699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:04:41.218775  771699 pause.go:52] kubelet running: false
	I0111 09:04:41.218846  771699 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:04:41.395613  771699 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:04:41.395692  771699 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:04:41.467303  771699 cri.go:96] found id: "e2c5f1a589123d27ad903daf6aa3bac49856c181b9976f8336c6af69771590d4"
	I0111 09:04:41.467326  771699 cri.go:96] found id: "b727470771749969490fec69bb6f9cc8d254a874166542b81d6c9dc796246f68"
	I0111 09:04:41.467331  771699 cri.go:96] found id: "73d9a283074adb9ebdd703527aa0eca8069e6e36375607411f549fdc67d6fa8e"
	I0111 09:04:41.467335  771699 cri.go:96] found id: "3a849cb62cfb1018959839831eb215de72cee5a888a77c6d5bd24e8f28010ef7"
	I0111 09:04:41.467338  771699 cri.go:96] found id: "1d23c38218c09008ed0624126143b92ef4ae15746f4fb4fec5a67590f7b14aaf"
	I0111 09:04:41.467342  771699 cri.go:96] found id: "3d8f07a9089011370ce578f9e96c1fca727ce96da1e432ccd926d97b1ea3545e"
	I0111 09:04:41.467345  771699 cri.go:96] found id: "8df31809024e57fa523fb773427e68d11f877dc356c992cd0201a1b33573775d"
	I0111 09:04:41.467348  771699 cri.go:96] found id: "be3cae0859a767cb1d810c075beaa74a4697bb90f12bf159bea72e3e87da79a6"
	I0111 09:04:41.467351  771699 cri.go:96] found id: "da8138b59df8207b192f5696b4f20a5ebb599324657398e62b0076ccd122e19f"
	I0111 09:04:41.467378  771699 cri.go:96] found id: "199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576"
	I0111 09:04:41.467387  771699 cri.go:96] found id: "bd56401a5c752eae1d3614979ba146f00260bc3c39f492b048ec64ee36838966"
	I0111 09:04:41.467391  771699 cri.go:96] found id: ""
	I0111 09:04:41.467451  771699 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:04:42.009572  771699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:04:42.025543  771699 pause.go:52] kubelet running: false
	I0111 09:04:42.025622  771699 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:04:42.220112  771699 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:04:42.220262  771699 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:04:42.299031  771699 cri.go:96] found id: "e2c5f1a589123d27ad903daf6aa3bac49856c181b9976f8336c6af69771590d4"
	I0111 09:04:42.299059  771699 cri.go:96] found id: "b727470771749969490fec69bb6f9cc8d254a874166542b81d6c9dc796246f68"
	I0111 09:04:42.299065  771699 cri.go:96] found id: "73d9a283074adb9ebdd703527aa0eca8069e6e36375607411f549fdc67d6fa8e"
	I0111 09:04:42.299069  771699 cri.go:96] found id: "3a849cb62cfb1018959839831eb215de72cee5a888a77c6d5bd24e8f28010ef7"
	I0111 09:04:42.299072  771699 cri.go:96] found id: "1d23c38218c09008ed0624126143b92ef4ae15746f4fb4fec5a67590f7b14aaf"
	I0111 09:04:42.299076  771699 cri.go:96] found id: "3d8f07a9089011370ce578f9e96c1fca727ce96da1e432ccd926d97b1ea3545e"
	I0111 09:04:42.299079  771699 cri.go:96] found id: "8df31809024e57fa523fb773427e68d11f877dc356c992cd0201a1b33573775d"
	I0111 09:04:42.299082  771699 cri.go:96] found id: "be3cae0859a767cb1d810c075beaa74a4697bb90f12bf159bea72e3e87da79a6"
	I0111 09:04:42.299085  771699 cri.go:96] found id: "da8138b59df8207b192f5696b4f20a5ebb599324657398e62b0076ccd122e19f"
	I0111 09:04:42.299092  771699 cri.go:96] found id: "199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576"
	I0111 09:04:42.299096  771699 cri.go:96] found id: "bd56401a5c752eae1d3614979ba146f00260bc3c39f492b048ec64ee36838966"
	I0111 09:04:42.299099  771699 cri.go:96] found id: ""
	I0111 09:04:42.299162  771699 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:04:42.315145  771699 out.go:203] 
	W0111 09:04:42.318247  771699 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:04:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:04:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 09:04:42.318272  771699 out.go:285] * 
	* 
	W0111 09:04:42.323264  771699 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 09:04:42.326373  771699 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-931581 --alsologtostderr -v=1 failed: exit status 80
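The pause failure above reduces to "sudo runc list -f json" exiting 1 with "open /run/runc: no such file or directory": crictl still enumerates the kube-system, kubernetes-dashboard and istio-operator containers, but runc finds no state under its default root, so the pause path aborts with GUEST_PAUSE. Below is a minimal diagnostic sketch in Go, assuming the binary path and profile name taken from this report; the candidate state roots probed are guesses for illustration, not paths confirmed by the log.

// pauseprobe.go - diagnostic sketch only (assumptions: run on the test host,
// out/minikube-linux-arm64 and the profile name are taken from this report,
// the candidate runc state roots are hypothetical).
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\nerr=%v\n%s\n", name, args, err, out)
}

func main() {
	profile := "old-k8s-version-931581"
	mk := "out/minikube-linux-arm64"

	// crictl still sees the pods (as the log above shows); list them for reference.
	run(mk, "ssh", "-p", profile, "--", "sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")

	// runc keeps per-container state under a "root" directory; the pause path
	// failed on the default /run/runc. Probe a few candidate roots.
	for _, root := range []string{"/run/runc", "/run/crun", "/run/containers"} {
		run(mk, "ssh", "-p", profile, "--", "sudo", "ls", root)
	}
}

If none of the probed roots exists while crictl keeps listing containers, that would suggest a mismatch between the state the CRI runtime actually maintains and the /run/runc path the pause code reads.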
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-931581
helpers_test.go:244: (dbg) docker inspect old-k8s-version-931581:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b",
	        "Created": "2026-01-11T09:02:21.912162594Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 769087,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:03:42.121536557Z",
	            "FinishedAt": "2026-01-11T09:03:41.318381845Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/hostname",
	        "HostsPath": "/var/lib/docker/containers/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/hosts",
	        "LogPath": "/var/lib/docker/containers/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b-json.log",
	        "Name": "/old-k8s-version-931581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-931581:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-931581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b",
	                "LowerDir": "/var/lib/docker/overlay2/1a13c8b1136b833866b5da78a40fb0aa10f6414034f887f96467846c64a4c542-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a13c8b1136b833866b5da78a40fb0aa10f6414034f887f96467846c64a4c542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a13c8b1136b833866b5da78a40fb0aa10f6414034f887f96467846c64a4c542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a13c8b1136b833866b5da78a40fb0aa10f6414034f887f96467846c64a4c542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-931581",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-931581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-931581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-931581",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-931581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b2599b4a2f29ee1d2d93e1ba3e739d497fafb92840149cc10075758d2020696",
	            "SandboxKey": "/var/run/docker/netns/6b2599b4a2f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-931581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:36:dd:d7:4c:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b56797f12ccaa56ea8e718a635d68c0d137f49a40ab56b2bf2b5a235f2e0cf2",
	                    "EndpointID": "0e3594e8b2d25b5e352ec06e5a6c339997754b94d98d8b3b5b1fdf9f27761917",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-931581",
	                        "93b661cce923"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
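The forwarded SSH port that the post-mortem commands depend on comes straight out of the inspect payload above; the cli_runner lines in this report read it with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. A small sketch of the same lookup, assuming only a docker CLI on PATH and the container name from this report:

// sshport.go - sketch of the port lookup the cli_runner lines perform
// (assumptions: docker CLI on PATH, container name taken from this report).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template string that appears in the cli_runner.go log lines.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format,
		"old-k8s-version-931581").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For this run the NetworkSettings block above reports HostPort "33788".
	fmt.Println("ssh forwarded to 127.0.0.1:" + strings.TrimSpace(string(out)))
}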
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-931581 -n old-k8s-version-931581
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-931581 -n old-k8s-version-931581: exit status 2 (396.577956ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
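The status check prints the host state on stdout even though the command exits 2, which is why the harness records "Running" and notes the exit code as "may be ok". A sketch of reading the state the same way, capturing stdout despite the non-zero exit; the binary path and profile name are the ones used throughout this report:

// hoststate.go - sketch of reading the host state as the helper does
// (assumptions: binary path and profile name taken from this report).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-931581",
		"-n", "old-k8s-version-931581")
	out, err := cmd.Output() // stdout still carries "Running" even when exit != 0
	fmt.Printf("host state: %q (exit err: %v)\n", strings.TrimSpace(string(out)), err)
}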
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-931581 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-931581 logs -n 25: (1.344175502s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-293572 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo containerd config dump                                                                                                                                                                                                  │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo crio config                                                                                                                                                                                                             │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ delete  │ -p cilium-293572                                                                                                                                                                                                                              │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │ 11 Jan 26 08:55 UTC │
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │ 11 Jan 26 08:56 UTC │
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ delete  │ -p cert-expiration-448134                                                                                                                                                                                                                     │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ start   │ -p force-systemd-flag-630015 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-630015 │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │                     │
	│ delete  │ -p force-systemd-env-472282                                                                                                                                                                                                                   │ force-systemd-env-472282  │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:01 UTC │
	│ start   │ -p cert-options-459267 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ cert-options-459267 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ -p cert-options-459267 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ delete  │ -p cert-options-459267                                                                                                                                                                                                                        │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-931581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │                     │
	│ stop    │ -p old-k8s-version-931581 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-931581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:04 UTC │
	│ image   │ old-k8s-version-931581 image list --format=json                                                                                                                                                                                               │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ pause   │ -p old-k8s-version-931581 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:03:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:03:41.836315  768959 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:03:41.836465  768959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:03:41.836477  768959 out.go:374] Setting ErrFile to fd 2...
	I0111 09:03:41.836483  768959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:03:41.836753  768959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:03:41.837132  768959 out.go:368] Setting JSON to false
	I0111 09:03:41.837979  768959 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13572,"bootTime":1768108650,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:03:41.838051  768959 start.go:143] virtualization:  
	I0111 09:03:41.841213  768959 out.go:179] * [old-k8s-version-931581] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:03:41.845082  768959 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:03:41.845163  768959 notify.go:221] Checking for updates...
	I0111 09:03:41.851082  768959 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:03:41.854080  768959 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:03:41.856984  768959 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:03:41.859877  768959 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:03:41.862689  768959 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:03:41.866221  768959 config.go:182] Loaded profile config "old-k8s-version-931581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 09:03:41.869589  768959 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I0111 09:03:41.872376  768959 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:03:41.903262  768959 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:03:41.903391  768959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:03:41.966578  768959 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:03:41.949654596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:03:41.966680  768959 docker.go:319] overlay module found
	I0111 09:03:41.971676  768959 out.go:179] * Using the docker driver based on existing profile
	I0111 09:03:41.974512  768959 start.go:309] selected driver: docker
	I0111 09:03:41.974538  768959 start.go:928] validating driver "docker" against &{Name:old-k8s-version-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:03:41.974641  768959 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:03:41.975370  768959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:03:42.026007  768959 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:03:42.016249935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:03:42.026579  768959 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:03:42.026618  768959 cni.go:84] Creating CNI manager for ""
	I0111 09:03:42.026674  768959 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:03:42.026711  768959 start.go:353] cluster config:
	{Name:old-k8s-version-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:03:42.031660  768959 out.go:179] * Starting "old-k8s-version-931581" primary control-plane node in "old-k8s-version-931581" cluster
	I0111 09:03:42.034482  768959 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:03:42.037450  768959 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:03:42.040392  768959 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 09:03:42.040451  768959 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0111 09:03:42.040465  768959 cache.go:65] Caching tarball of preloaded images
	I0111 09:03:42.040555  768959 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:03:42.040568  768959 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0111 09:03:42.040713  768959 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/config.json ...
	I0111 09:03:42.040948  768959 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:03:42.061475  768959 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:03:42.061500  768959 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:03:42.061522  768959 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:03:42.061555  768959 start.go:360] acquireMachinesLock for old-k8s-version-931581: {Name:mkab3bc7162aba2e88171e4e683a8fd13db4db95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:03:42.061630  768959 start.go:364] duration metric: took 53.26µs to acquireMachinesLock for "old-k8s-version-931581"
	I0111 09:03:42.061652  768959 start.go:96] Skipping create...Using existing machine configuration
	I0111 09:03:42.061657  768959 fix.go:54] fixHost starting: 
	I0111 09:03:42.061916  768959 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:42.081123  768959 fix.go:112] recreateIfNeeded on old-k8s-version-931581: state=Stopped err=<nil>
	W0111 09:03:42.081167  768959 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 09:03:42.084588  768959 out.go:252] * Restarting existing docker container for "old-k8s-version-931581" ...
	I0111 09:03:42.084704  768959 cli_runner.go:164] Run: docker start old-k8s-version-931581
	I0111 09:03:42.376608  768959 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:42.400088  768959 kic.go:430] container "old-k8s-version-931581" state is running.
	I0111 09:03:42.400476  768959 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-931581
	I0111 09:03:42.426388  768959 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/config.json ...
	I0111 09:03:42.426725  768959 machine.go:94] provisionDockerMachine start ...
	I0111 09:03:42.426905  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:42.457819  768959 main.go:144] libmachine: Using SSH client type: native
	I0111 09:03:42.458184  768959 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33788 <nil> <nil>}
	I0111 09:03:42.458199  768959 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 09:03:42.459226  768959 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37394->127.0.0.1:33788: read: connection reset by peer
	I0111 09:03:45.609748  768959 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-931581
	
	I0111 09:03:45.609802  768959 ubuntu.go:182] provisioning hostname "old-k8s-version-931581"
	I0111 09:03:45.609868  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:45.628076  768959 main.go:144] libmachine: Using SSH client type: native
	I0111 09:03:45.628405  768959 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33788 <nil> <nil>}
	I0111 09:03:45.628425  768959 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-931581 && echo "old-k8s-version-931581" | sudo tee /etc/hostname
	I0111 09:03:45.791762  768959 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-931581
	
	I0111 09:03:45.791929  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:45.809895  768959 main.go:144] libmachine: Using SSH client type: native
	I0111 09:03:45.810252  768959 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33788 <nil> <nil>}
	I0111 09:03:45.810277  768959 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-931581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-931581/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-931581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 09:03:45.958399  768959 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 09:03:45.958426  768959 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 09:03:45.958457  768959 ubuntu.go:190] setting up certificates
	I0111 09:03:45.958466  768959 provision.go:84] configureAuth start
	I0111 09:03:45.958548  768959 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-931581
	I0111 09:03:45.976582  768959 provision.go:143] copyHostCerts
	I0111 09:03:45.976665  768959 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 09:03:45.976687  768959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 09:03:45.976773  768959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 09:03:45.976887  768959 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 09:03:45.976899  768959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 09:03:45.976926  768959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 09:03:45.976991  768959 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 09:03:45.977000  768959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 09:03:45.977024  768959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 09:03:45.977087  768959 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-931581 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-931581]
	I0111 09:03:46.333789  768959 provision.go:177] copyRemoteCerts
	I0111 09:03:46.333865  768959 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 09:03:46.333912  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:46.353564  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:46.457925  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 09:03:46.475048  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0111 09:03:46.493352  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 09:03:46.511639  768959 provision.go:87] duration metric: took 553.151074ms to configureAuth
	I0111 09:03:46.511665  768959 ubuntu.go:206] setting minikube options for container-runtime
	I0111 09:03:46.511859  768959 config.go:182] Loaded profile config "old-k8s-version-931581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 09:03:46.511973  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:46.529339  768959 main.go:144] libmachine: Using SSH client type: native
	I0111 09:03:46.529664  768959 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33788 <nil> <nil>}
	I0111 09:03:46.529685  768959 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 09:03:46.879368  768959 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 09:03:46.879396  768959 machine.go:97] duration metric: took 4.452656304s to provisionDockerMachine
	I0111 09:03:46.879418  768959 start.go:293] postStartSetup for "old-k8s-version-931581" (driver="docker")
	I0111 09:03:46.879429  768959 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 09:03:46.879498  768959 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 09:03:46.879553  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:46.903551  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:47.014691  768959 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 09:03:47.018033  768959 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 09:03:47.018059  768959 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 09:03:47.018070  768959 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 09:03:47.018148  768959 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 09:03:47.018235  768959 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 09:03:47.018332  768959 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 09:03:47.025783  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:03:47.043328  768959 start.go:296] duration metric: took 163.894034ms for postStartSetup
	I0111 09:03:47.043425  768959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 09:03:47.043473  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:47.059893  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:47.163192  768959 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 09:03:47.167713  768959 fix.go:56] duration metric: took 5.106048651s for fixHost
	I0111 09:03:47.167741  768959 start.go:83] releasing machines lock for "old-k8s-version-931581", held for 5.106102255s
	I0111 09:03:47.167810  768959 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-931581
	I0111 09:03:47.184096  768959 ssh_runner.go:195] Run: cat /version.json
	I0111 09:03:47.184146  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:47.184219  768959 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 09:03:47.184284  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:47.206565  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:47.207757  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:47.406544  768959 ssh_runner.go:195] Run: systemctl --version
	I0111 09:03:47.413198  768959 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 09:03:47.447845  768959 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 09:03:47.452193  768959 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 09:03:47.452266  768959 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 09:03:47.459966  768959 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
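The find invocation logged just above loses its shell quoting (ssh_runner prints the argv unquoted). With quoting restored it reads roughly as follows, moving any bridge/podman CNI configs aside so only the CNI minikube manages stays active (a sketch, equivalent up to the -printf bookkeeping):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;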
	I0111 09:03:47.459988  768959 start.go:496] detecting cgroup driver to use...
	I0111 09:03:47.460020  768959 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 09:03:47.460067  768959 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 09:03:47.475325  768959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 09:03:47.488313  768959 docker.go:218] disabling cri-docker service (if available) ...
	I0111 09:03:47.488396  768959 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 09:03:47.504151  768959 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 09:03:47.517276  768959 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 09:03:47.622977  768959 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 09:03:47.761195  768959 docker.go:234] disabling docker service ...
	I0111 09:03:47.761321  768959 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 09:03:47.776041  768959 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 09:03:47.789317  768959 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 09:03:47.898339  768959 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 09:03:48.018575  768959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
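Since only one container runtime can own the node, the block above stops, disables and masks both cri-dockerd and the docker engine before cri-o is reconfigured. Condensed into plain shell, the same sequence is:

	# stop and mask cri-dockerd so it cannot claim the CRI socket
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	# same treatment for the docker engine itself
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	# verify nothing is left running
	sudo systemctl is-active --quiet docker || echo "docker is stopped"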
	I0111 09:03:48.033148  768959 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 09:03:48.047901  768959 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0111 09:03:48.047984  768959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.057235  768959 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 09:03:48.057322  768959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.066966  768959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.076014  768959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.085097  768959 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 09:03:48.093004  768959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.102754  768959 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.111763  768959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.120640  768959 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 09:03:48.128218  768959 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 09:03:48.135896  768959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:03:48.251733  768959 ssh_runner.go:195] Run: sudo systemctl restart crio
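The crictl and cri-o configuration steps above reduce to three things: point crictl at cri-o's socket, pin the pause image and cgroup driver in /etc/crio/crio.conf.d/02-crio.conf, and restart cri-o. A minimal sketch with the values taken from this log:

	# point crictl at the cri-o socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and the cgroup driver cri-o should use
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# reload units and restart cri-o so the changes take effect
	sudo systemctl daemon-reload
	sudo systemctl restart crio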
	I0111 09:03:48.424781  768959 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 09:03:48.424852  768959 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 09:03:48.428764  768959 start.go:574] Will wait 60s for crictl version
	I0111 09:03:48.428835  768959 ssh_runner.go:195] Run: which crictl
	I0111 09:03:48.432964  768959 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 09:03:48.465091  768959 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 09:03:48.465188  768959 ssh_runner.go:195] Run: crio --version
	I0111 09:03:48.499257  768959 ssh_runner.go:195] Run: crio --version
	I0111 09:03:48.537060  768959 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I0111 09:03:48.539871  768959 cli_runner.go:164] Run: docker network inspect old-k8s-version-931581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:03:48.556782  768959 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 09:03:48.560729  768959 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
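The /etc/hosts update above avoids sed: it filters out any stale host.minikube.internal entry, appends the current mapping, and copies the result back in one step. Expanded into readable shell (IP and hostname from the log):

	# rebuild /etc/hosts with a fresh host.minikube.internal entry
	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.85.1\thost.minikube.internal\n'
	} > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts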
	I0111 09:03:48.570986  768959 kubeadm.go:884] updating cluster {Name:old-k8s-version-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 09:03:48.571106  768959 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 09:03:48.571157  768959 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:03:48.615603  768959 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:03:48.615629  768959 crio.go:433] Images already preloaded, skipping extraction
	I0111 09:03:48.615687  768959 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:03:48.640641  768959 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:03:48.640666  768959 cache_images.go:86] Images are preloaded, skipping loading
	I0111 09:03:48.640675  768959 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I0111 09:03:48.640770  768959 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-931581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
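The kubelet flags logged above are what ends up in the 372-byte systemd drop-in scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A plausible reconstruction of that drop-in, using only the fields visible in the log:

	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-931581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

	[Install]
	EOF
	sudo systemctl daemon-reload && sudo systemctl start kubelet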
	I0111 09:03:48.640858  768959 ssh_runner.go:195] Run: crio config
	I0111 09:03:48.706529  768959 cni.go:84] Creating CNI manager for ""
	I0111 09:03:48.706554  768959 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:03:48.706572  768959 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 09:03:48.706595  768959 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-931581 NodeName:old-k8s-version-931581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 09:03:48.706735  768959 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-931581"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 09:03:48.706815  768959 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0111 09:03:48.714547  768959 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 09:03:48.714619  768959 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 09:03:48.722252  768959 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0111 09:03:48.734823  768959 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 09:03:48.748025  768959 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
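The rendered kubeadm config shown earlier is written to /var/tmp/minikube/kubeadm.yaml.new here before it is applied. When reproducing this setup by hand it can be sanity-checked first with kubeadm's validator; a sketch, assuming the v1.28.0 binary path from the log and that the kubeadm config validate subcommand is available in that release:

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new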
	I0111 09:03:48.760730  768959 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 09:03:48.764176  768959 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:03:48.773591  768959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:03:48.881346  768959 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:03:48.900245  768959 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581 for IP: 192.168.85.2
	I0111 09:03:48.900321  768959 certs.go:195] generating shared ca certs ...
	I0111 09:03:48.900354  768959 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:03:48.900560  768959 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 09:03:48.900678  768959 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 09:03:48.900707  768959 certs.go:257] generating profile certs ...
	I0111 09:03:48.900864  768959 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.key
	I0111 09:03:48.900982  768959 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.key.eb6f276c
	I0111 09:03:48.901064  768959 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.key
	I0111 09:03:48.901225  768959 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 09:03:48.901292  768959 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 09:03:48.901317  768959 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 09:03:48.901380  768959 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 09:03:48.901443  768959 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 09:03:48.901510  768959 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 09:03:48.901603  768959 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:03:48.903921  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 09:03:48.928157  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 09:03:48.948748  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 09:03:48.967884  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 09:03:48.987759  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0111 09:03:49.008317  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 09:03:49.025818  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 09:03:49.045747  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 09:03:49.083444  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 09:03:49.103763  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 09:03:49.133045  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 09:03:49.155528  768959 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 09:03:49.169891  768959 ssh_runner.go:195] Run: openssl version
	I0111 09:03:49.176436  768959 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:03:49.189097  768959 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 09:03:49.197914  768959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:03:49.202202  768959 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:03:49.202315  768959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:03:49.244986  768959 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 09:03:49.255921  768959 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 09:03:49.263352  768959 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 09:03:49.271071  768959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 09:03:49.274984  768959 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 09:03:49.275049  768959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 09:03:49.316937  768959 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 09:03:49.324318  768959 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 09:03:49.331831  768959 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 09:03:49.339547  768959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 09:03:49.343102  768959 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 09:03:49.343171  768959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 09:03:49.384086  768959 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
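The openssl -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, which is how TLS clients on the node later find it. The same technique for a single certificate:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	# subject-hash name OpenSSL looks for at verification time
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"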
	I0111 09:03:49.391471  768959 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 09:03:49.395056  768959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 09:03:49.435845  768959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 09:03:49.476812  768959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 09:03:49.519560  768959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 09:03:49.569900  768959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 09:03:49.622002  768959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
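The -checkend 86400 runs above ask OpenSSL whether each control-plane certificate will still be valid 24 hours from now (exit 0 if it will, 1 if it expires inside the window). The same check over the certificates named in the log, as a loop:

	for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	           etcd/server etcd/peer etcd/healthcheck-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
	    || echo "certificate ${crt}.crt expires within 24h"
	done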
	I0111 09:03:49.686292  768959 kubeadm.go:401] StartCluster: {Name:old-k8s-version-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:03:49.686446  768959 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 09:03:49.686568  768959 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 09:03:49.747563  768959 cri.go:96] found id: "3d8f07a9089011370ce578f9e96c1fca727ce96da1e432ccd926d97b1ea3545e"
	I0111 09:03:49.747636  768959 cri.go:96] found id: "8df31809024e57fa523fb773427e68d11f877dc356c992cd0201a1b33573775d"
	I0111 09:03:49.747654  768959 cri.go:96] found id: "be3cae0859a767cb1d810c075beaa74a4697bb90f12bf159bea72e3e87da79a6"
	I0111 09:03:49.747675  768959 cri.go:96] found id: "da8138b59df8207b192f5696b4f20a5ebb599324657398e62b0076ccd122e19f"
	I0111 09:03:49.747716  768959 cri.go:96] found id: ""
	I0111 09:03:49.747810  768959 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 09:03:49.765325  768959 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:03:49Z" level=error msg="open /run/runc: no such file or directory"
	I0111 09:03:49.765461  768959 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 09:03:49.783209  768959 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 09:03:49.783294  768959 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 09:03:49.783385  768959 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 09:03:49.795110  768959 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 09:03:49.795617  768959 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-931581" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:03:49.795767  768959 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-575040/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-931581" cluster setting kubeconfig missing "old-k8s-version-931581" context setting]
	I0111 09:03:49.796114  768959 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:03:49.797748  768959 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 09:03:49.816928  768959 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0111 09:03:49.817013  768959 kubeadm.go:602] duration metric: took 33.69811ms to restartPrimaryControlPlane
	I0111 09:03:49.817038  768959 kubeadm.go:403] duration metric: took 130.756021ms to StartCluster
	I0111 09:03:49.817083  768959 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:03:49.817193  768959 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:03:49.817968  768959 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:03:49.818352  768959 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:03:49.819014  768959 config.go:182] Loaded profile config "old-k8s-version-931581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 09:03:49.819153  768959 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:03:49.819285  768959 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-931581"
	I0111 09:03:49.819336  768959 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-931581"
	W0111 09:03:49.819357  768959 addons.go:248] addon storage-provisioner should already be in state true
	I0111 09:03:49.819413  768959 host.go:66] Checking if "old-k8s-version-931581" exists ...
	I0111 09:03:49.820048  768959 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:49.820272  768959 addons.go:70] Setting dashboard=true in profile "old-k8s-version-931581"
	I0111 09:03:49.820311  768959 addons.go:239] Setting addon dashboard=true in "old-k8s-version-931581"
	W0111 09:03:49.820349  768959 addons.go:248] addon dashboard should already be in state true
	I0111 09:03:49.820393  768959 host.go:66] Checking if "old-k8s-version-931581" exists ...
	I0111 09:03:49.820909  768959 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:49.821434  768959 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-931581"
	I0111 09:03:49.821499  768959 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-931581"
	I0111 09:03:49.821783  768959 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:49.830205  768959 out.go:179] * Verifying Kubernetes components...
	I0111 09:03:49.838896  768959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:03:49.882687  768959 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0111 09:03:49.882876  768959 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:03:49.884888  768959 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-931581"
	W0111 09:03:49.884921  768959 addons.go:248] addon default-storageclass should already be in state true
	I0111 09:03:49.884951  768959 host.go:66] Checking if "old-k8s-version-931581" exists ...
	I0111 09:03:49.885405  768959 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:49.886658  768959 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:03:49.886679  768959 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:03:49.886736  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:49.891458  768959 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0111 09:03:49.894306  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0111 09:03:49.894339  768959 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0111 09:03:49.894407  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:49.922367  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:49.943114  768959 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:03:49.943136  768959 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:03:49.943197  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:49.957880  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:49.980970  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:50.228536  768959 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:03:50.240273  768959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:03:50.253204  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0111 09:03:50.253230  768959 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0111 09:03:50.290634  768959 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-931581" to be "Ready" ...
	I0111 09:03:50.315659  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0111 09:03:50.315685  768959 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0111 09:03:50.356632  768959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:03:50.406675  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0111 09:03:50.406702  768959 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0111 09:03:50.481247  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0111 09:03:50.481273  768959 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0111 09:03:50.565996  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0111 09:03:50.566021  768959 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0111 09:03:50.593296  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0111 09:03:50.593323  768959 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0111 09:03:50.610608  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0111 09:03:50.610635  768959 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0111 09:03:50.634678  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0111 09:03:50.634705  768959 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0111 09:03:50.655115  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:03:50.655142  768959 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0111 09:03:50.673167  768959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:03:54.362830  768959 node_ready.go:49] node "old-k8s-version-931581" is "Ready"
	I0111 09:03:54.362915  768959 node_ready.go:38] duration metric: took 4.072245808s for node "old-k8s-version-931581" to be "Ready" ...
	I0111 09:03:54.362965  768959 api_server.go:52] waiting for apiserver process to appear ...
	I0111 09:03:54.363070  768959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 09:03:56.034391  768959 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.794078106s)
	I0111 09:03:56.034441  768959 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.67778659s)
	I0111 09:03:56.580676  768959 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.90746225s)
	I0111 09:03:56.580764  768959 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.21759409s)
	I0111 09:03:56.580839  768959 api_server.go:72] duration metric: took 6.762420682s to wait for apiserver process to appear ...
	I0111 09:03:56.580847  768959 api_server.go:88] waiting for apiserver healthz status ...
	I0111 09:03:56.580867  768959 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 09:03:56.583725  768959 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-931581 addons enable metrics-server
	
	I0111 09:03:56.586595  768959 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I0111 09:03:56.589548  768959 addons.go:530] duration metric: took 6.770412998s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I0111 09:03:56.590498  768959 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0111 09:03:56.592242  768959 api_server.go:141] control plane version: v1.28.0
	I0111 09:03:56.592272  768959 api_server.go:131] duration metric: took 11.417387ms to wait for apiserver health ...
	I0111 09:03:56.592282  768959 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 09:03:56.596252  768959 system_pods.go:59] 8 kube-system pods found
	I0111 09:03:56.596298  768959 system_pods.go:61] "coredns-5dd5756b68-2gkt5" [fed76c30-7304-4890-9b21-67f48729cb7f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:03:56.596309  768959 system_pods.go:61] "etcd-old-k8s-version-931581" [2846557f-ef29-426a-9620-d7182b3d2e5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:03:56.596314  768959 system_pods.go:61] "kindnet-vl8hm" [1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3] Running
	I0111 09:03:56.596321  768959 system_pods.go:61] "kube-apiserver-old-k8s-version-931581" [8a8af346-ef92-4b59-9a35-5bcfa837543f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:03:56.596328  768959 system_pods.go:61] "kube-controller-manager-old-k8s-version-931581" [9cd07045-f552-41ef-8ea6-c2584ba61279] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:03:56.596339  768959 system_pods.go:61] "kube-proxy-xg9bv" [489cf8f4-64d7-44c0-b233-c8235d397932] Running
	I0111 09:03:56.596353  768959 system_pods.go:61] "kube-scheduler-old-k8s-version-931581" [8feb9a4e-b8ad-473e-ac05-2ce9ed02a7d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:03:56.596358  768959 system_pods.go:61] "storage-provisioner" [d7c7d49d-3c49-49aa-97c5-9692e0c23d99] Running
	I0111 09:03:56.596365  768959 system_pods.go:74] duration metric: took 4.058809ms to wait for pod list to return data ...
	I0111 09:03:56.596377  768959 default_sa.go:34] waiting for default service account to be created ...
	I0111 09:03:56.598939  768959 default_sa.go:45] found service account: "default"
	I0111 09:03:56.598968  768959 default_sa.go:55] duration metric: took 2.58515ms for default service account to be created ...
	I0111 09:03:56.598978  768959 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 09:03:56.602716  768959 system_pods.go:86] 8 kube-system pods found
	I0111 09:03:56.602748  768959 system_pods.go:89] "coredns-5dd5756b68-2gkt5" [fed76c30-7304-4890-9b21-67f48729cb7f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:03:56.602758  768959 system_pods.go:89] "etcd-old-k8s-version-931581" [2846557f-ef29-426a-9620-d7182b3d2e5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:03:56.602764  768959 system_pods.go:89] "kindnet-vl8hm" [1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3] Running
	I0111 09:03:56.602777  768959 system_pods.go:89] "kube-apiserver-old-k8s-version-931581" [8a8af346-ef92-4b59-9a35-5bcfa837543f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:03:56.602785  768959 system_pods.go:89] "kube-controller-manager-old-k8s-version-931581" [9cd07045-f552-41ef-8ea6-c2584ba61279] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:03:56.602791  768959 system_pods.go:89] "kube-proxy-xg9bv" [489cf8f4-64d7-44c0-b233-c8235d397932] Running
	I0111 09:03:56.602804  768959 system_pods.go:89] "kube-scheduler-old-k8s-version-931581" [8feb9a4e-b8ad-473e-ac05-2ce9ed02a7d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:03:56.602809  768959 system_pods.go:89] "storage-provisioner" [d7c7d49d-3c49-49aa-97c5-9692e0c23d99] Running
	I0111 09:03:56.602816  768959 system_pods.go:126] duration metric: took 3.832985ms to wait for k8s-apps to be running ...
	I0111 09:03:56.602828  768959 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 09:03:56.602887  768959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:03:56.616966  768959 system_svc.go:56] duration metric: took 14.127633ms WaitForService to wait for kubelet
	I0111 09:03:56.617001  768959 kubeadm.go:587] duration metric: took 6.798581535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:03:56.617021  768959 node_conditions.go:102] verifying NodePressure condition ...
	I0111 09:03:56.620291  768959 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 09:03:56.620328  768959 node_conditions.go:123] node cpu capacity is 2
	I0111 09:03:56.620345  768959 node_conditions.go:105] duration metric: took 3.314587ms to run NodePressure ...
	I0111 09:03:56.620359  768959 start.go:242] waiting for startup goroutines ...
	I0111 09:03:56.620367  768959 start.go:247] waiting for cluster config update ...
	I0111 09:03:56.620383  768959 start.go:256] writing updated cluster config ...
	I0111 09:03:56.620674  768959 ssh_runner.go:195] Run: rm -f paused
	I0111 09:03:56.624719  768959 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:03:56.632869  768959 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-2gkt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:00.598421  757749 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000473872s
	I0111 09:04:00.598459  757749 kubeadm.go:319] 
	I0111 09:04:00.598526  757749 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 09:04:00.598567  757749 kubeadm.go:319] 	- The kubelet is not running
	I0111 09:04:00.598685  757749 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 09:04:00.598696  757749 kubeadm.go:319] 
	I0111 09:04:00.598811  757749 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 09:04:00.598848  757749 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 09:04:00.598889  757749 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 09:04:00.598899  757749 kubeadm.go:319] 
	I0111 09:04:00.609837  757749 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:04:00.610361  757749 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 09:04:00.610477  757749 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 09:04:00.610770  757749 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 09:04:00.610789  757749 kubeadm.go:319] 
	I0111 09:04:00.610865  757749 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0111 09:04:00.611020  757749 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-630015 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-630015 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000473872s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
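One of the warnings above flags the kubelet v1.35 cgroup v1 deprecation on this cgroup v1 host; if that is what keeps the kubelet unhealthy, the warning says to set FailCgroupV1 to false in the kubelet configuration (and to skip the validation explicitly). A hedged sketch of the corresponding KubeletConfiguration fragment, with the field name inferred from the warning text, so verify it against the kubelet version in use:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# allow kubelet v1.35+ to run on a cgroup v1 node (deprecated; see KEP-5573)
	failCgroupV1: false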
	
	I0111 09:04:00.611112  757749 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0111 09:04:01.030167  757749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:04:01.043832  757749 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 09:04:01.043904  757749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 09:04:01.052245  757749 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 09:04:01.052265  757749 kubeadm.go:158] found existing configuration files:
	
	I0111 09:04:01.052317  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 09:04:01.060474  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 09:04:01.060546  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 09:04:01.068369  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 09:04:01.076442  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 09:04:01.076507  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 09:04:01.084958  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 09:04:01.093001  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 09:04:01.093111  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 09:04:01.104919  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 09:04:01.116271  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 09:04:01.116345  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 09:04:01.125437  757749 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 09:04:01.180127  757749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 09:04:01.180214  757749 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 09:04:01.263691  757749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 09:04:01.263771  757749 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 09:04:01.263812  757749 kubeadm.go:319] OS: Linux
	I0111 09:04:01.263863  757749 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 09:04:01.263922  757749 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 09:04:01.263981  757749 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 09:04:01.264035  757749 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 09:04:01.264089  757749 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 09:04:01.264142  757749 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 09:04:01.264192  757749 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 09:04:01.264249  757749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 09:04:01.264301  757749 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 09:04:01.332134  757749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 09:04:01.332257  757749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 09:04:01.332354  757749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 09:04:01.340058  757749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0111 09:03:58.638897  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:00.642798  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	I0111 09:04:01.345221  757749 out.go:252]   - Generating certificates and keys ...
	I0111 09:04:01.345313  757749 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 09:04:01.345383  757749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 09:04:01.345459  757749 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0111 09:04:01.345520  757749 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0111 09:04:01.345591  757749 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0111 09:04:01.345645  757749 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0111 09:04:01.345707  757749 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0111 09:04:01.345769  757749 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0111 09:04:01.346280  757749 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0111 09:04:01.346736  757749 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0111 09:04:01.347161  757749 kubeadm.go:319] [certs] Using the existing "sa" key
	I0111 09:04:01.347255  757749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 09:04:02.153749  757749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 09:04:02.549592  757749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 09:04:02.718485  757749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 09:04:03.108587  757749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 09:04:03.292500  757749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 09:04:03.293149  757749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 09:04:03.295747  757749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W0111 09:04:03.138882  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:05.139310  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	I0111 09:04:03.298929  757749 out.go:252]   - Booting up control plane ...
	I0111 09:04:03.299039  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 09:04:03.299122  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 09:04:03.300853  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 09:04:03.316699  757749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 09:04:03.316810  757749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 09:04:03.325052  757749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 09:04:03.325437  757749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 09:04:03.325591  757749 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 09:04:03.470373  757749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 09:04:03.470503  757749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W0111 09:04:07.163015  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:09.639708  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:11.640257  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:14.139409  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:16.139540  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:18.638378  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:20.639219  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:23.138715  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:25.139724  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	I0111 09:04:27.139152  768959 pod_ready.go:94] pod "coredns-5dd5756b68-2gkt5" is "Ready"
	I0111 09:04:27.139183  768959 pod_ready.go:86] duration metric: took 30.506287443s for pod "coredns-5dd5756b68-2gkt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.143272  768959 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.148395  768959 pod_ready.go:94] pod "etcd-old-k8s-version-931581" is "Ready"
	I0111 09:04:27.148420  768959 pod_ready.go:86] duration metric: took 5.118548ms for pod "etcd-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.151604  768959 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.156952  768959 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-931581" is "Ready"
	I0111 09:04:27.156984  768959 pod_ready.go:86] duration metric: took 5.350797ms for pod "kube-apiserver-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.160374  768959 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.337279  768959 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-931581" is "Ready"
	I0111 09:04:27.337306  768959 pod_ready.go:86] duration metric: took 176.903267ms for pod "kube-controller-manager-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.538349  768959 pod_ready.go:83] waiting for pod "kube-proxy-xg9bv" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.936908  768959 pod_ready.go:94] pod "kube-proxy-xg9bv" is "Ready"
	I0111 09:04:27.936939  768959 pod_ready.go:86] duration metric: took 398.563765ms for pod "kube-proxy-xg9bv" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:28.138352  768959 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:28.537008  768959 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-931581" is "Ready"
	I0111 09:04:28.537037  768959 pod_ready.go:86] duration metric: took 398.612322ms for pod "kube-scheduler-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:28.537050  768959 pod_ready.go:40] duration metric: took 31.912293732s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:04:28.591854  768959 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I0111 09:04:28.595112  768959 out.go:203] 
	W0111 09:04:28.598081  768959 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I0111 09:04:28.601082  768959 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:04:28.604158  768959 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-931581" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 09:04:26 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:26.306287466Z" level=info msg="Created container e2c5f1a589123d27ad903daf6aa3bac49856c181b9976f8336c6af69771590d4: kube-system/storage-provisioner/storage-provisioner" id=c04d1d04-cc18-4e75-8337-b49154d32717 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:04:26 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:26.306949522Z" level=info msg="Starting container: e2c5f1a589123d27ad903daf6aa3bac49856c181b9976f8336c6af69771590d4" id=bccd96b7-30e5-4eb8-84c9-ed01901c9edb name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:04:26 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:26.308764413Z" level=info msg="Started container" PID=1680 containerID=e2c5f1a589123d27ad903daf6aa3bac49856c181b9976f8336c6af69771590d4 description=kube-system/storage-provisioner/storage-provisioner id=bccd96b7-30e5-4eb8-84c9-ed01901c9edb name=/runtime.v1.RuntimeService/StartContainer sandboxID=06de23e6fe6b6bba446a3779a4d29d1906e61285af1dd7f3f7e84f78426f901e
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.653723391Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8eb90bce-5049-41c7-8356-55330fdbfdbe name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.654662382Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=36495cd5-2161-4a4a-8b41-e7a98ced1233 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.655679059Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2/dashboard-metrics-scraper" id=091bb177-afd5-4730-92e2-d09f7d7ef323 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.655808628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.662221429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.662770064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.680174137Z" level=info msg="Created container 199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2/dashboard-metrics-scraper" id=091bb177-afd5-4730-92e2-d09f7d7ef323 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.68102575Z" level=info msg="Starting container: 199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576" id=c1e7433a-9569-4384-898f-e7665b62f2f9 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.683469515Z" level=info msg="Started container" PID=1695 containerID=199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2/dashboard-metrics-scraper id=c1e7433a-9569-4384-898f-e7665b62f2f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=069e662cb13297aed2df5077ddeb1b0ded5aae90a25dd7aef31464b758937a1f
	Jan 11 09:04:27 old-k8s-version-931581 conmon[1693]: conmon 199afbf4b56c27ef4457 <ninfo>: container 1695 exited with status 1
	Jan 11 09:04:28 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:28.28847287Z" level=info msg="Removing container: 09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e" id=b2a0a8b4-4ff1-406f-88d2-58dfe03a4fee name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:04:28 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:28.298464203Z" level=info msg="Error loading conmon cgroup of container 09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e: cgroup deleted" id=b2a0a8b4-4ff1-406f-88d2-58dfe03a4fee name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:04:28 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:28.303644865Z" level=info msg="Removed container 09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2/dashboard-metrics-scraper" id=b2a0a8b4-4ff1-406f-88d2-58dfe03a4fee name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.067062977Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.067102912Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.072452954Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.07248996Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.077037235Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.077075972Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.077104264Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.081672503Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.081709443Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	199afbf4b56c2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   069e662cb1329       dashboard-metrics-scraper-5f989dc9cf-4xxq2       kubernetes-dashboard
	e2c5f1a589123       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   06de23e6fe6b6       storage-provisioner                              kube-system
	bd56401a5c752       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   bffb8bc3abe74       kubernetes-dashboard-8694d4445c-cnrhh            kubernetes-dashboard
	b727470771749       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           47 seconds ago      Running             coredns                     1                   3a22c43f052cc       coredns-5dd5756b68-2gkt5                         kube-system
	00df60b332b70       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           47 seconds ago      Running             busybox                     1                   cfe5770e056c5       busybox                                          default
	73d9a283074ad       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           47 seconds ago      Exited              storage-provisioner         1                   06de23e6fe6b6       storage-provisioner                              kube-system
	3a849cb62cfb1       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           47 seconds ago      Running             kindnet-cni                 1                   2179f8076dfd6       kindnet-vl8hm                                    kube-system
	1d23c38218c09       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           47 seconds ago      Running             kube-proxy                  1                   563fc471abaf2       kube-proxy-xg9bv                                 kube-system
	3d8f07a908901       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           53 seconds ago      Running             etcd                        1                   19b70a2b5b8d3       etcd-old-k8s-version-931581                      kube-system
	8df31809024e5       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           53 seconds ago      Running             kube-scheduler              1                   7e2b7d3286e45       kube-scheduler-old-k8s-version-931581            kube-system
	be3cae0859a76       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           53 seconds ago      Running             kube-apiserver              1                   17fcc74ee9ebc       kube-apiserver-old-k8s-version-931581            kube-system
	da8138b59df82       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           53 seconds ago      Running             kube-controller-manager     1                   2c5ccd8484c4a       kube-controller-manager-old-k8s-version-931581   kube-system
	
	
	==> coredns [b727470771749969490fec69bb6f9cc8d254a874166542b81d6c9dc796246f68] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41102 - 41255 "HINFO IN 2972062232650211598.4557670974590327025. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005469954s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-931581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-931581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=old-k8s-version-931581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_02_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:02:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-931581
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:04:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:04:24 +0000   Sun, 11 Jan 2026 09:02:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:04:24 +0000   Sun, 11 Jan 2026 09:02:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:04:24 +0000   Sun, 11 Jan 2026 09:02:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 09:04:24 +0000   Sun, 11 Jan 2026 09:03:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-931581
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                af69ca9e-bf38-4107-aa6e-3001379de44e
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-2gkt5                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     103s
	  kube-system                 etcd-old-k8s-version-931581                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-vl8hm                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-old-k8s-version-931581             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-old-k8s-version-931581    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-xg9bv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-old-k8s-version-931581             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-4xxq2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-cnrhh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node old-k8s-version-931581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node old-k8s-version-931581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node old-k8s-version-931581 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s               node-controller  Node old-k8s-version-931581 event: Registered Node old-k8s-version-931581 in Controller
	  Normal  NodeReady                89s                kubelet          Node old-k8s-version-931581 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node old-k8s-version-931581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node old-k8s-version-931581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node old-k8s-version-931581 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node old-k8s-version-931581 event: Registered Node old-k8s-version-931581 in Controller
	
	
	==> dmesg <==
	[Jan11 08:30] overlayfs: idmapped layers are currently not supported
	[Jan11 08:31] overlayfs: idmapped layers are currently not supported
	[Jan11 08:32] overlayfs: idmapped layers are currently not supported
	[Jan11 08:35] overlayfs: idmapped layers are currently not supported
	[Jan11 08:36] overlayfs: idmapped layers are currently not supported
	[Jan11 08:37] overlayfs: idmapped layers are currently not supported
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3d8f07a9089011370ce578f9e96c1fca727ce96da1e432ccd926d97b1ea3545e] <==
	{"level":"info","ts":"2026-01-11T09:03:50.229823Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T09:03:50.229864Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T09:03:50.240393Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"9f0758e1c58a86ed","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2026-01-11T09:03:50.240723Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:03:50.240754Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:03:50.240766Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:03:50.247632Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-11T09:03:50.247836Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T09:03:50.24786Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T09:03:50.24795Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:03:50.247958Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:03:50.906182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T09:03:50.906235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:03:50.906264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:03:50.906278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T09:03:50.906285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:03:50.906295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-11T09:03:50.906304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:03:50.916406Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-931581 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:03:50.916524Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:03:50.916581Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:03:50.917537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T09:03:50.920415Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:03:50.92048Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T09:03:50.921731Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 09:04:43 up  3:47,  0 user,  load average: 1.23, 1.40, 1.86
	Linux old-k8s-version-931581 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3a849cb62cfb1018959839831eb215de72cee5a888a77c6d5bd24e8f28010ef7] <==
	I0111 09:03:55.841344       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:03:55.841709       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 09:03:55.841847       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:03:55.841859       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:03:55.841871       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:03:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:03:56.051187       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:03:56.051284       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:03:56.051321       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:03:56.052226       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0111 09:04:26.052030       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0111 09:04:26.052038       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0111 09:04:26.052127       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0111 09:04:26.052218       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0111 09:04:27.552072       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 09:04:27.552105       1 metrics.go:72] Registering metrics
	I0111 09:04:27.552182       1 controller.go:711] "Syncing nftables rules"
	I0111 09:04:36.057378       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:04:36.057445       1 main.go:301] handling current node
	
	
	==> kube-apiserver [be3cae0859a767cb1d810c075beaa74a4697bb90f12bf159bea72e3e87da79a6] <==
	I0111 09:03:54.169489       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0111 09:03:54.409205       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:03:54.428547       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 09:03:54.450932       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0111 09:03:54.450961       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0111 09:03:54.452042       1 shared_informer.go:318] Caches are synced for configmaps
	I0111 09:03:54.452154       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0111 09:03:54.452192       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0111 09:03:54.453781       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0111 09:03:54.470392       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0111 09:03:54.481929       1 aggregator.go:166] initial CRD sync complete...
	I0111 09:03:54.482023       1 autoregister_controller.go:141] Starting autoregister controller
	I0111 09:03:54.482054       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 09:03:54.482085       1 cache.go:39] Caches are synced for autoregister controller
	I0111 09:03:55.036261       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0111 09:03:56.396651       1 controller.go:624] quota admission added evaluator for: namespaces
	I0111 09:03:56.449896       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0111 09:03:56.475437       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:03:56.484811       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:03:56.494931       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0111 09:03:56.556079       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.163.67"}
	I0111 09:03:56.574033       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.45.113"}
	I0111 09:04:07.203367       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0111 09:04:07.270948       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:04:07.273651       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [da8138b59df8207b192f5696b4f20a5ebb599324657398e62b0076ccd122e19f] <==
	I0111 09:04:07.282297       1 shared_informer.go:318] Caches are synced for resource quota
	I0111 09:04:07.326249       1 shared_informer.go:318] Caches are synced for attach detach
	I0111 09:04:07.336499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.013266ms"
	I0111 09:04:07.338703       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-cnrhh"
	I0111 09:04:07.340508       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-4xxq2"
	I0111 09:04:07.340620       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.698451ms"
	I0111 09:04:07.356797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="91.398441ms"
	I0111 09:04:07.365126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="137.96308ms"
	I0111 09:04:07.377850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.518173ms"
	I0111 09:04:07.377928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="38.064µs"
	I0111 09:04:07.388693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="23.245404ms"
	I0111 09:04:07.388812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.134µs"
	I0111 09:04:07.402014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="147.407µs"
	I0111 09:04:07.711231       1 shared_informer.go:318] Caches are synced for garbage collector
	I0111 09:04:07.734432       1 shared_informer.go:318] Caches are synced for garbage collector
	I0111 09:04:07.734462       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0111 09:04:13.279741       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.444987ms"
	I0111 09:04:13.279845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.412µs"
	I0111 09:04:17.277255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.354µs"
	I0111 09:04:18.278103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.752µs"
	I0111 09:04:19.278573       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.788µs"
	I0111 09:04:26.965367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.842623ms"
	I0111 09:04:26.966096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="150.812µs"
	I0111 09:04:28.306396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.313µs"
	I0111 09:04:37.671294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.22µs"
	
	
	==> kube-proxy [1d23c38218c09008ed0624126143b92ef4ae15746f4fb4fec5a67590f7b14aaf] <==
	I0111 09:03:55.706440       1 server_others.go:69] "Using iptables proxy"
	I0111 09:03:55.746979       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0111 09:03:55.807391       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:03:55.809177       1 server_others.go:152] "Using iptables Proxier"
	I0111 09:03:55.809210       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0111 09:03:55.809222       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0111 09:03:55.809244       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0111 09:03:55.809432       1 server.go:846] "Version info" version="v1.28.0"
	I0111 09:03:55.809441       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:03:55.830861       1 config.go:188] "Starting service config controller"
	I0111 09:03:55.830885       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0111 09:03:55.830915       1 config.go:97] "Starting endpoint slice config controller"
	I0111 09:03:55.830919       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0111 09:03:55.831315       1 config.go:315] "Starting node config controller"
	I0111 09:03:55.831322       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0111 09:03:55.938232       1 shared_informer.go:318] Caches are synced for node config
	I0111 09:03:55.940560       1 shared_informer.go:318] Caches are synced for service config
	I0111 09:03:55.940594       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8df31809024e57fa523fb773427e68d11f877dc356c992cd0201a1b33573775d] <==
	I0111 09:03:53.760721       1 serving.go:348] Generated self-signed cert in-memory
	I0111 09:03:54.547886       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0111 09:03:54.554238       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:03:54.568499       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0111 09:03:54.569141       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0111 09:03:54.569171       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 09:03:54.569378       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0111 09:03:54.569197       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0111 09:03:54.569459       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0111 09:03:54.569211       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0111 09:03:54.569877       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0111 09:03:54.669551       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0111 09:03:54.669684       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0111 09:03:54.670730       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: I0111 09:04:07.356948     792 topology_manager.go:215] "Topology Admit Handler" podUID="ab919fee-d1a7-4612-9a7b-adf934b0d7c4" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-cnrhh"
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: I0111 09:04:07.464131     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab919fee-d1a7-4612-9a7b-adf934b0d7c4-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-cnrhh\" (UID: \"ab919fee-d1a7-4612-9a7b-adf934b0d7c4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cnrhh"
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: I0111 09:04:07.464194     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvftm\" (UniqueName: \"kubernetes.io/projected/ab919fee-d1a7-4612-9a7b-adf934b0d7c4-kube-api-access-mvftm\") pod \"kubernetes-dashboard-8694d4445c-cnrhh\" (UID: \"ab919fee-d1a7-4612-9a7b-adf934b0d7c4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cnrhh"
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: I0111 09:04:07.464223     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9fa86067-a09c-407a-a141-9dc159038379-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-4xxq2\" (UID: \"9fa86067-a09c-407a-a141-9dc159038379\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2"
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: I0111 09:04:07.464248     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrw65\" (UniqueName: \"kubernetes.io/projected/9fa86067-a09c-407a-a141-9dc159038379-kube-api-access-wrw65\") pod \"dashboard-metrics-scraper-5f989dc9cf-4xxq2\" (UID: \"9fa86067-a09c-407a-a141-9dc159038379\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2"
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: W0111 09:04:07.697133     792 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/crio-bffb8bc3abe740da2cbeb011c534b6039c33b4a88a06424abb91a5fda150c89e WatchSource:0}: Error finding container bffb8bc3abe740da2cbeb011c534b6039c33b4a88a06424abb91a5fda150c89e: Status 404 returned error can't find the container with id bffb8bc3abe740da2cbeb011c534b6039c33b4a88a06424abb91a5fda150c89e
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: W0111 09:04:07.704474     792 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/crio-069e662cb13297aed2df5077ddeb1b0ded5aae90a25dd7aef31464b758937a1f WatchSource:0}: Error finding container 069e662cb13297aed2df5077ddeb1b0ded5aae90a25dd7aef31464b758937a1f: Status 404 returned error can't find the container with id 069e662cb13297aed2df5077ddeb1b0ded5aae90a25dd7aef31464b758937a1f
	Jan 11 09:04:13 old-k8s-version-931581 kubelet[792]: I0111 09:04:13.264744     792 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cnrhh" podStartSLOduration=1.728372427 podCreationTimestamp="2026-01-11 09:04:07 +0000 UTC" firstStartedPulling="2026-01-11 09:04:07.701363639 +0000 UTC m=+18.794396632" lastFinishedPulling="2026-01-11 09:04:12.237662068 +0000 UTC m=+23.330695062" observedRunningTime="2026-01-11 09:04:13.264443326 +0000 UTC m=+24.357476328" watchObservedRunningTime="2026-01-11 09:04:13.264670857 +0000 UTC m=+24.357703851"
	Jan 11 09:04:17 old-k8s-version-931581 kubelet[792]: I0111 09:04:17.253843     792 scope.go:117] "RemoveContainer" containerID="86efdb5e76da3e8c51e3525f636ebcf05ef7aa78015d25386f322dbc8c01f3e6"
	Jan 11 09:04:18 old-k8s-version-931581 kubelet[792]: I0111 09:04:18.258418     792 scope.go:117] "RemoveContainer" containerID="86efdb5e76da3e8c51e3525f636ebcf05ef7aa78015d25386f322dbc8c01f3e6"
	Jan 11 09:04:18 old-k8s-version-931581 kubelet[792]: I0111 09:04:18.259014     792 scope.go:117] "RemoveContainer" containerID="09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e"
	Jan 11 09:04:18 old-k8s-version-931581 kubelet[792]: E0111 09:04:18.259688     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4xxq2_kubernetes-dashboard(9fa86067-a09c-407a-a141-9dc159038379)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2" podUID="9fa86067-a09c-407a-a141-9dc159038379"
	Jan 11 09:04:19 old-k8s-version-931581 kubelet[792]: I0111 09:04:19.262236     792 scope.go:117] "RemoveContainer" containerID="09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e"
	Jan 11 09:04:19 old-k8s-version-931581 kubelet[792]: E0111 09:04:19.262995     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4xxq2_kubernetes-dashboard(9fa86067-a09c-407a-a141-9dc159038379)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2" podUID="9fa86067-a09c-407a-a141-9dc159038379"
	Jan 11 09:04:26 old-k8s-version-931581 kubelet[792]: I0111 09:04:26.277786     792 scope.go:117] "RemoveContainer" containerID="73d9a283074adb9ebdd703527aa0eca8069e6e36375607411f549fdc67d6fa8e"
	Jan 11 09:04:27 old-k8s-version-931581 kubelet[792]: I0111 09:04:27.653081     792 scope.go:117] "RemoveContainer" containerID="09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e"
	Jan 11 09:04:28 old-k8s-version-931581 kubelet[792]: I0111 09:04:28.287130     792 scope.go:117] "RemoveContainer" containerID="09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e"
	Jan 11 09:04:28 old-k8s-version-931581 kubelet[792]: I0111 09:04:28.287591     792 scope.go:117] "RemoveContainer" containerID="199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576"
	Jan 11 09:04:28 old-k8s-version-931581 kubelet[792]: E0111 09:04:28.288239     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4xxq2_kubernetes-dashboard(9fa86067-a09c-407a-a141-9dc159038379)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2" podUID="9fa86067-a09c-407a-a141-9dc159038379"
	Jan 11 09:04:37 old-k8s-version-931581 kubelet[792]: I0111 09:04:37.652885     792 scope.go:117] "RemoveContainer" containerID="199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576"
	Jan 11 09:04:37 old-k8s-version-931581 kubelet[792]: E0111 09:04:37.653246     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4xxq2_kubernetes-dashboard(9fa86067-a09c-407a-a141-9dc159038379)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2" podUID="9fa86067-a09c-407a-a141-9dc159038379"
	Jan 11 09:04:40 old-k8s-version-931581 kubelet[792]: I0111 09:04:40.873530     792 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 11 09:04:40 old-k8s-version-931581 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 09:04:40 old-k8s-version-931581 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 09:04:40 old-k8s-version-931581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [bd56401a5c752eae1d3614979ba146f00260bc3c39f492b048ec64ee36838966] <==
	2026/01/11 09:04:12 Using namespace: kubernetes-dashboard
	2026/01/11 09:04:12 Using in-cluster config to connect to apiserver
	2026/01/11 09:04:12 Using secret token for csrf signing
	2026/01/11 09:04:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 09:04:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 09:04:12 Successful initial request to the apiserver, version: v1.28.0
	2026/01/11 09:04:12 Generating JWE encryption key
	2026/01/11 09:04:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 09:04:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 09:04:13 Initializing JWE encryption key from synchronized object
	2026/01/11 09:04:13 Creating in-cluster Sidecar client
	2026/01/11 09:04:13 Serving insecurely on HTTP port: 9090
	2026/01/11 09:04:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:04:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:04:12 Starting overwatch
	
	
	==> storage-provisioner [73d9a283074adb9ebdd703527aa0eca8069e6e36375607411f549fdc67d6fa8e] <==
	I0111 09:03:55.794274       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 09:04:25.798598       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e2c5f1a589123d27ad903daf6aa3bac49856c181b9976f8336c6af69771590d4] <==
	I0111 09:04:26.325266       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 09:04:26.338998       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 09:04:26.339055       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0111 09:04:43.738895       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 09:04:43.740210       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8a0480bd-74ad-46e7-a509-867a9d06bbdb", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-931581_292d6665-8ad3-4089-a831-198cca10d7f7 became leader
	I0111 09:04:43.740407       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-931581_292d6665-8ad3-4089-a831-198cca10d7f7!
	I0111 09:04:43.841024       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-931581_292d6665-8ad3-4089-a831-198cca10d7f7!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-931581 -n old-k8s-version-931581
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-931581 -n old-k8s-version-931581: exit status 2 (368.347799ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-931581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-931581
helpers_test.go:244: (dbg) docker inspect old-k8s-version-931581:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b",
	        "Created": "2026-01-11T09:02:21.912162594Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 769087,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:03:42.121536557Z",
	            "FinishedAt": "2026-01-11T09:03:41.318381845Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/hostname",
	        "HostsPath": "/var/lib/docker/containers/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/hosts",
	        "LogPath": "/var/lib/docker/containers/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b-json.log",
	        "Name": "/old-k8s-version-931581",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-931581:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-931581",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b",
	                "LowerDir": "/var/lib/docker/overlay2/1a13c8b1136b833866b5da78a40fb0aa10f6414034f887f96467846c64a4c542-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a13c8b1136b833866b5da78a40fb0aa10f6414034f887f96467846c64a4c542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a13c8b1136b833866b5da78a40fb0aa10f6414034f887f96467846c64a4c542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a13c8b1136b833866b5da78a40fb0aa10f6414034f887f96467846c64a4c542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-931581",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-931581/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-931581",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-931581",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-931581",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b2599b4a2f29ee1d2d93e1ba3e739d497fafb92840149cc10075758d2020696",
	            "SandboxKey": "/var/run/docker/netns/6b2599b4a2f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-931581": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:36:dd:d7:4c:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b56797f12ccaa56ea8e718a635d68c0d137f49a40ab56b2bf2b5a235f2e0cf2",
	                    "EndpointID": "0e3594e8b2d25b5e352ec06e5a6c339997754b94d98d8b3b5b1fdf9f27761917",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-931581",
	                        "93b661cce923"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-931581 -n old-k8s-version-931581
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-931581 -n old-k8s-version-931581: exit status 2 (380.032552ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-931581 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-931581 logs -n 25: (1.471416825s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-293572 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo containerd config dump                                                                                                                                                                                                  │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo crio config                                                                                                                                                                                                             │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ delete  │ -p cilium-293572                                                                                                                                                                                                                              │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │ 11 Jan 26 08:55 UTC │
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │ 11 Jan 26 08:56 UTC │
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ delete  │ -p cert-expiration-448134                                                                                                                                                                                                                     │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ start   │ -p force-systemd-flag-630015 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-630015 │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │                     │
	│ delete  │ -p force-systemd-env-472282                                                                                                                                                                                                                   │ force-systemd-env-472282  │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:01 UTC │
	│ start   │ -p cert-options-459267 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ cert-options-459267 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ -p cert-options-459267 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ delete  │ -p cert-options-459267                                                                                                                                                                                                                        │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-931581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │                     │
	│ stop    │ -p old-k8s-version-931581 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-931581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:04 UTC │
	│ image   │ old-k8s-version-931581 image list --format=json                                                                                                                                                                                               │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ pause   │ -p old-k8s-version-931581 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:03:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:03:41.836315  768959 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:03:41.836465  768959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:03:41.836477  768959 out.go:374] Setting ErrFile to fd 2...
	I0111 09:03:41.836483  768959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:03:41.836753  768959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:03:41.837132  768959 out.go:368] Setting JSON to false
	I0111 09:03:41.837979  768959 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13572,"bootTime":1768108650,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:03:41.838051  768959 start.go:143] virtualization:  
	I0111 09:03:41.841213  768959 out.go:179] * [old-k8s-version-931581] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:03:41.845082  768959 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:03:41.845163  768959 notify.go:221] Checking for updates...
	I0111 09:03:41.851082  768959 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:03:41.854080  768959 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:03:41.856984  768959 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:03:41.859877  768959 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:03:41.862689  768959 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:03:41.866221  768959 config.go:182] Loaded profile config "old-k8s-version-931581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 09:03:41.869589  768959 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I0111 09:03:41.872376  768959 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:03:41.903262  768959 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:03:41.903391  768959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:03:41.966578  768959 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:03:41.949654596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:03:41.966680  768959 docker.go:319] overlay module found
	I0111 09:03:41.971676  768959 out.go:179] * Using the docker driver based on existing profile
	I0111 09:03:41.974512  768959 start.go:309] selected driver: docker
	I0111 09:03:41.974538  768959 start.go:928] validating driver "docker" against &{Name:old-k8s-version-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:03:41.974641  768959 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:03:41.975370  768959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:03:42.026007  768959 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:03:42.016249935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:03:42.026579  768959 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:03:42.026618  768959 cni.go:84] Creating CNI manager for ""
	I0111 09:03:42.026674  768959 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:03:42.026711  768959 start.go:353] cluster config:
	{Name:old-k8s-version-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:03:42.031660  768959 out.go:179] * Starting "old-k8s-version-931581" primary control-plane node in "old-k8s-version-931581" cluster
	I0111 09:03:42.034482  768959 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:03:42.037450  768959 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:03:42.040392  768959 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 09:03:42.040451  768959 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0111 09:03:42.040465  768959 cache.go:65] Caching tarball of preloaded images
	I0111 09:03:42.040555  768959 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:03:42.040568  768959 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0111 09:03:42.040713  768959 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/config.json ...
	I0111 09:03:42.040948  768959 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:03:42.061475  768959 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:03:42.061500  768959 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:03:42.061522  768959 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:03:42.061555  768959 start.go:360] acquireMachinesLock for old-k8s-version-931581: {Name:mkab3bc7162aba2e88171e4e683a8fd13db4db95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:03:42.061630  768959 start.go:364] duration metric: took 53.26µs to acquireMachinesLock for "old-k8s-version-931581"
	I0111 09:03:42.061652  768959 start.go:96] Skipping create...Using existing machine configuration
	I0111 09:03:42.061657  768959 fix.go:54] fixHost starting: 
	I0111 09:03:42.061916  768959 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:42.081123  768959 fix.go:112] recreateIfNeeded on old-k8s-version-931581: state=Stopped err=<nil>
	W0111 09:03:42.081167  768959 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 09:03:42.084588  768959 out.go:252] * Restarting existing docker container for "old-k8s-version-931581" ...
	I0111 09:03:42.084704  768959 cli_runner.go:164] Run: docker start old-k8s-version-931581
	I0111 09:03:42.376608  768959 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:42.400088  768959 kic.go:430] container "old-k8s-version-931581" state is running.
	I0111 09:03:42.400476  768959 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-931581
	I0111 09:03:42.426388  768959 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/config.json ...
	I0111 09:03:42.426725  768959 machine.go:94] provisionDockerMachine start ...
	I0111 09:03:42.426905  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:42.457819  768959 main.go:144] libmachine: Using SSH client type: native
	I0111 09:03:42.458184  768959 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33788 <nil> <nil>}
	I0111 09:03:42.458199  768959 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 09:03:42.459226  768959 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37394->127.0.0.1:33788: read: connection reset by peer
	I0111 09:03:45.609748  768959 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-931581
	
	I0111 09:03:45.609802  768959 ubuntu.go:182] provisioning hostname "old-k8s-version-931581"
	I0111 09:03:45.609868  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:45.628076  768959 main.go:144] libmachine: Using SSH client type: native
	I0111 09:03:45.628405  768959 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33788 <nil> <nil>}
	I0111 09:03:45.628425  768959 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-931581 && echo "old-k8s-version-931581" | sudo tee /etc/hostname
	I0111 09:03:45.791762  768959 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-931581
	
	I0111 09:03:45.791929  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:45.809895  768959 main.go:144] libmachine: Using SSH client type: native
	I0111 09:03:45.810252  768959 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33788 <nil> <nil>}
	I0111 09:03:45.810277  768959 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-931581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-931581/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-931581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 09:03:45.958399  768959 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 09:03:45.958426  768959 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 09:03:45.958457  768959 ubuntu.go:190] setting up certificates
	I0111 09:03:45.958466  768959 provision.go:84] configureAuth start
	I0111 09:03:45.958548  768959 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-931581
	I0111 09:03:45.976582  768959 provision.go:143] copyHostCerts
	I0111 09:03:45.976665  768959 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 09:03:45.976687  768959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 09:03:45.976773  768959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 09:03:45.976887  768959 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 09:03:45.976899  768959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 09:03:45.976926  768959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 09:03:45.976991  768959 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 09:03:45.977000  768959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 09:03:45.977024  768959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 09:03:45.977087  768959 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-931581 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-931581]
	I0111 09:03:46.333789  768959 provision.go:177] copyRemoteCerts
	I0111 09:03:46.333865  768959 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 09:03:46.333912  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:46.353564  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:46.457925  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 09:03:46.475048  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0111 09:03:46.493352  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 09:03:46.511639  768959 provision.go:87] duration metric: took 553.151074ms to configureAuth
	I0111 09:03:46.511665  768959 ubuntu.go:206] setting minikube options for container-runtime
	I0111 09:03:46.511859  768959 config.go:182] Loaded profile config "old-k8s-version-931581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 09:03:46.511973  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:46.529339  768959 main.go:144] libmachine: Using SSH client type: native
	I0111 09:03:46.529664  768959 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33788 <nil> <nil>}
	I0111 09:03:46.529685  768959 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 09:03:46.879368  768959 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 09:03:46.879396  768959 machine.go:97] duration metric: took 4.452656304s to provisionDockerMachine
	I0111 09:03:46.879418  768959 start.go:293] postStartSetup for "old-k8s-version-931581" (driver="docker")
	I0111 09:03:46.879429  768959 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 09:03:46.879498  768959 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 09:03:46.879553  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:46.903551  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:47.014691  768959 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 09:03:47.018033  768959 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 09:03:47.018059  768959 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 09:03:47.018070  768959 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 09:03:47.018148  768959 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 09:03:47.018235  768959 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 09:03:47.018332  768959 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 09:03:47.025783  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:03:47.043328  768959 start.go:296] duration metric: took 163.894034ms for postStartSetup
	I0111 09:03:47.043425  768959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 09:03:47.043473  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:47.059893  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:47.163192  768959 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 09:03:47.167713  768959 fix.go:56] duration metric: took 5.106048651s for fixHost
	I0111 09:03:47.167741  768959 start.go:83] releasing machines lock for "old-k8s-version-931581", held for 5.106102255s
	I0111 09:03:47.167810  768959 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-931581
	I0111 09:03:47.184096  768959 ssh_runner.go:195] Run: cat /version.json
	I0111 09:03:47.184146  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:47.184219  768959 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 09:03:47.184284  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:47.206565  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:47.207757  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:47.406544  768959 ssh_runner.go:195] Run: systemctl --version
	I0111 09:03:47.413198  768959 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 09:03:47.447845  768959 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 09:03:47.452193  768959 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 09:03:47.452266  768959 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 09:03:47.459966  768959 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 09:03:47.459988  768959 start.go:496] detecting cgroup driver to use...
	I0111 09:03:47.460020  768959 detect.go:175] detected "cgroupfs" cgroup driver on host os
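The "cgroupfs" decision logged above comes from inspecting the host's cgroup setup. Below is a minimal Go sketch of one common signal used when choosing between the systemd and cgroupfs cgroup drivers, namely whether the unified cgroup v2 hierarchy is mounted; this is an illustration only, not what minikube's detect.go actually does:

	// cgroupcheck.go: report whether the host exposes the cgroup v2 unified
	// hierarchy. The presence of this file is one common input to cgroup
	// driver detection; it does not by itself pick a driver.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 (legacy hierarchy)")
		}
	}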
	I0111 09:03:47.460067  768959 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 09:03:47.475325  768959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 09:03:47.488313  768959 docker.go:218] disabling cri-docker service (if available) ...
	I0111 09:03:47.488396  768959 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 09:03:47.504151  768959 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 09:03:47.517276  768959 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 09:03:47.622977  768959 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 09:03:47.761195  768959 docker.go:234] disabling docker service ...
	I0111 09:03:47.761321  768959 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 09:03:47.776041  768959 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 09:03:47.789317  768959 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 09:03:47.898339  768959 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 09:03:48.018575  768959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 09:03:48.033148  768959 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 09:03:48.047901  768959 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0111 09:03:48.047984  768959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.057235  768959 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 09:03:48.057322  768959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.066966  768959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.076014  768959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.085097  768959 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 09:03:48.093004  768959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.102754  768959 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.111763  768959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:03:48.120640  768959 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 09:03:48.128218  768959 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 09:03:48.135896  768959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:03:48.251733  768959 ssh_runner.go:195] Run: sudo systemctl restart crio
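For reference, the tee and sed commands above leave the container runtime plumbing configured roughly as follows before the crio restart. This is a minimal sketch showing only the keys those commands write; the rest of the stock files in the kicbase image is untouched and not reproduced here:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (only the rewritten keys)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]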
	I0111 09:03:48.424781  768959 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 09:03:48.424852  768959 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 09:03:48.428764  768959 start.go:574] Will wait 60s for crictl version
	I0111 09:03:48.428835  768959 ssh_runner.go:195] Run: which crictl
	I0111 09:03:48.432964  768959 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 09:03:48.465091  768959 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 09:03:48.465188  768959 ssh_runner.go:195] Run: crio --version
	I0111 09:03:48.499257  768959 ssh_runner.go:195] Run: crio --version
	I0111 09:03:48.537060  768959 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I0111 09:03:48.539871  768959 cli_runner.go:164] Run: docker network inspect old-k8s-version-931581 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:03:48.556782  768959 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 09:03:48.560729  768959 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:03:48.570986  768959 kubeadm.go:884] updating cluster {Name:old-k8s-version-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 09:03:48.571106  768959 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 09:03:48.571157  768959 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:03:48.615603  768959 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:03:48.615629  768959 crio.go:433] Images already preloaded, skipping extraction
	I0111 09:03:48.615687  768959 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:03:48.640641  768959 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:03:48.640666  768959 cache_images.go:86] Images are preloaded, skipping loading
	I0111 09:03:48.640675  768959 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I0111 09:03:48.640770  768959 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-931581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 09:03:48.640858  768959 ssh_runner.go:195] Run: crio config
	I0111 09:03:48.706529  768959 cni.go:84] Creating CNI manager for ""
	I0111 09:03:48.706554  768959 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:03:48.706572  768959 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 09:03:48.706595  768959 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-931581 NodeName:old-k8s-version-931581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 09:03:48.706735  768959 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-931581"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 09:03:48.706815  768959 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0111 09:03:48.714547  768959 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 09:03:48.714619  768959 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 09:03:48.722252  768959 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0111 09:03:48.734823  768959 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 09:03:48.748025  768959 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0111 09:03:48.760730  768959 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 09:03:48.764176  768959 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
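The /etc/hosts rewrites above (first for host.minikube.internal, then for control-plane.minikube.internal) follow a simple idempotent pattern: drop any existing line for the name, append the fresh tab-separated mapping, and copy the result back into place. A minimal standalone Go sketch of that pattern follows; the helper name and the scratch file are hypothetical, and this is not minikube's implementation:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost rewrites an /etc/hosts-style file so that exactly one line
	// maps ip to name, mirroring the grep -v / echo / cp pipeline in the log.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any previous mapping for this name (tab-separated suffix).
			if strings.HasSuffix(line, "\t"+name) || line == "" {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Run against a scratch copy; editing the real /etc/hosts needs root.
		if err := upsertHost("hosts.copy", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}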
	I0111 09:03:48.773591  768959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:03:48.881346  768959 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:03:48.900245  768959 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581 for IP: 192.168.85.2
	I0111 09:03:48.900321  768959 certs.go:195] generating shared ca certs ...
	I0111 09:03:48.900354  768959 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:03:48.900560  768959 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 09:03:48.900678  768959 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 09:03:48.900707  768959 certs.go:257] generating profile certs ...
	I0111 09:03:48.900864  768959 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.key
	I0111 09:03:48.900982  768959 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.key.eb6f276c
	I0111 09:03:48.901064  768959 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.key
	I0111 09:03:48.901225  768959 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 09:03:48.901292  768959 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 09:03:48.901317  768959 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 09:03:48.901380  768959 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 09:03:48.901443  768959 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 09:03:48.901510  768959 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 09:03:48.901603  768959 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:03:48.903921  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 09:03:48.928157  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 09:03:48.948748  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 09:03:48.967884  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 09:03:48.987759  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0111 09:03:49.008317  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 09:03:49.025818  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 09:03:49.045747  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 09:03:49.083444  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 09:03:49.103763  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 09:03:49.133045  768959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 09:03:49.155528  768959 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 09:03:49.169891  768959 ssh_runner.go:195] Run: openssl version
	I0111 09:03:49.176436  768959 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:03:49.189097  768959 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 09:03:49.197914  768959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:03:49.202202  768959 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:03:49.202315  768959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:03:49.244986  768959 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 09:03:49.255921  768959 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 09:03:49.263352  768959 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 09:03:49.271071  768959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 09:03:49.274984  768959 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 09:03:49.275049  768959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 09:03:49.316937  768959 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 09:03:49.324318  768959 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 09:03:49.331831  768959 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 09:03:49.339547  768959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 09:03:49.343102  768959 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 09:03:49.343171  768959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 09:03:49.384086  768959 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 09:03:49.391471  768959 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 09:03:49.395056  768959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 09:03:49.435845  768959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 09:03:49.476812  768959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 09:03:49.519560  768959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 09:03:49.569900  768959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 09:03:49.622002  768959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
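Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. An equivalent check with Go's crypto/x509, as a minimal sketch (the file path in main is hypothetical; the log checks the certs under /var/lib/minikube/certs):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires before
	// now+window, which is what `openssl x509 -checkend <seconds>` tests.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(window)), nil
	}

	func main() {
		expiring, err := expiresWithin("apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", expiring)
	}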
	I0111 09:03:49.686292  768959 kubeadm.go:401] StartCluster: {Name:old-k8s-version-931581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-931581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:03:49.686446  768959 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 09:03:49.686568  768959 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 09:03:49.747563  768959 cri.go:96] found id: "3d8f07a9089011370ce578f9e96c1fca727ce96da1e432ccd926d97b1ea3545e"
	I0111 09:03:49.747636  768959 cri.go:96] found id: "8df31809024e57fa523fb773427e68d11f877dc356c992cd0201a1b33573775d"
	I0111 09:03:49.747654  768959 cri.go:96] found id: "be3cae0859a767cb1d810c075beaa74a4697bb90f12bf159bea72e3e87da79a6"
	I0111 09:03:49.747675  768959 cri.go:96] found id: "da8138b59df8207b192f5696b4f20a5ebb599324657398e62b0076ccd122e19f"
	I0111 09:03:49.747716  768959 cri.go:96] found id: ""
	I0111 09:03:49.747810  768959 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 09:03:49.765325  768959 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:03:49Z" level=error msg="open /run/runc: no such file or directory"
	I0111 09:03:49.765461  768959 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 09:03:49.783209  768959 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 09:03:49.783294  768959 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 09:03:49.783385  768959 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 09:03:49.795110  768959 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 09:03:49.795617  768959 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-931581" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:03:49.795767  768959 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-575040/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-931581" cluster setting kubeconfig missing "old-k8s-version-931581" context setting]
	I0111 09:03:49.796114  768959 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
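The kubeconfig repair logged above adds the missing cluster and context entries for this profile. The repaired file ends up roughly of this shape; this is a hedged sketch of a standard minikube-written kubeconfig, not a dump of the actual file, and the cert paths are inferred from the profile paths seen elsewhere in the log:

	apiVersion: v1
	kind: Config
	clusters:
	- name: old-k8s-version-931581
	  cluster:
	    certificate-authority: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt
	    server: https://192.168.85.2:8443
	contexts:
	- name: old-k8s-version-931581
	  context:
	    cluster: old-k8s-version-931581
	    user: old-k8s-version-931581
	current-context: old-k8s-version-931581
	users:
	- name: old-k8s-version-931581
	  user:
	    client-certificate: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt
	    client-key: /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.key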
	I0111 09:03:49.797748  768959 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 09:03:49.816928  768959 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0111 09:03:49.817013  768959 kubeadm.go:602] duration metric: took 33.69811ms to restartPrimaryControlPlane
	I0111 09:03:49.817038  768959 kubeadm.go:403] duration metric: took 130.756021ms to StartCluster
	I0111 09:03:49.817083  768959 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:03:49.817193  768959 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:03:49.817968  768959 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:03:49.818352  768959 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:03:49.819014  768959 config.go:182] Loaded profile config "old-k8s-version-931581": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0111 09:03:49.819153  768959 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:03:49.819285  768959 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-931581"
	I0111 09:03:49.819336  768959 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-931581"
	W0111 09:03:49.819357  768959 addons.go:248] addon storage-provisioner should already be in state true
	I0111 09:03:49.819413  768959 host.go:66] Checking if "old-k8s-version-931581" exists ...
	I0111 09:03:49.820048  768959 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:49.820272  768959 addons.go:70] Setting dashboard=true in profile "old-k8s-version-931581"
	I0111 09:03:49.820311  768959 addons.go:239] Setting addon dashboard=true in "old-k8s-version-931581"
	W0111 09:03:49.820349  768959 addons.go:248] addon dashboard should already be in state true
	I0111 09:03:49.820393  768959 host.go:66] Checking if "old-k8s-version-931581" exists ...
	I0111 09:03:49.820909  768959 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:49.821434  768959 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-931581"
	I0111 09:03:49.821499  768959 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-931581"
	I0111 09:03:49.821783  768959 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:49.830205  768959 out.go:179] * Verifying Kubernetes components...
	I0111 09:03:49.838896  768959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:03:49.882687  768959 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0111 09:03:49.882876  768959 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:03:49.884888  768959 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-931581"
	W0111 09:03:49.884921  768959 addons.go:248] addon default-storageclass should already be in state true
	I0111 09:03:49.884951  768959 host.go:66] Checking if "old-k8s-version-931581" exists ...
	I0111 09:03:49.885405  768959 cli_runner.go:164] Run: docker container inspect old-k8s-version-931581 --format={{.State.Status}}
	I0111 09:03:49.886658  768959 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:03:49.886679  768959 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:03:49.886736  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:49.891458  768959 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0111 09:03:49.894306  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0111 09:03:49.894339  768959 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0111 09:03:49.894407  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:49.922367  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:49.943114  768959 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:03:49.943136  768959 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:03:49.943197  768959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-931581
	I0111 09:03:49.957880  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:49.980970  768959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33788 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/old-k8s-version-931581/id_rsa Username:docker}
	I0111 09:03:50.228536  768959 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:03:50.240273  768959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:03:50.253204  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0111 09:03:50.253230  768959 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0111 09:03:50.290634  768959 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-931581" to be "Ready" ...
	I0111 09:03:50.315659  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0111 09:03:50.315685  768959 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0111 09:03:50.356632  768959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:03:50.406675  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0111 09:03:50.406702  768959 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0111 09:03:50.481247  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0111 09:03:50.481273  768959 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0111 09:03:50.565996  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0111 09:03:50.566021  768959 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0111 09:03:50.593296  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0111 09:03:50.593323  768959 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0111 09:03:50.610608  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0111 09:03:50.610635  768959 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0111 09:03:50.634678  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0111 09:03:50.634705  768959 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0111 09:03:50.655115  768959 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:03:50.655142  768959 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0111 09:03:50.673167  768959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:03:54.362830  768959 node_ready.go:49] node "old-k8s-version-931581" is "Ready"
	I0111 09:03:54.362915  768959 node_ready.go:38] duration metric: took 4.072245808s for node "old-k8s-version-931581" to be "Ready" ...
	I0111 09:03:54.362965  768959 api_server.go:52] waiting for apiserver process to appear ...
	I0111 09:03:54.363070  768959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 09:03:56.034391  768959 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.794078106s)
	I0111 09:03:56.034441  768959 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.67778659s)
	I0111 09:03:56.580676  768959 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.90746225s)
	I0111 09:03:56.580764  768959 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.21759409s)
	I0111 09:03:56.580839  768959 api_server.go:72] duration metric: took 6.762420682s to wait for apiserver process to appear ...
	I0111 09:03:56.580847  768959 api_server.go:88] waiting for apiserver healthz status ...
	I0111 09:03:56.580867  768959 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 09:03:56.583725  768959 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-931581 addons enable metrics-server
	
	I0111 09:03:56.586595  768959 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I0111 09:03:56.589548  768959 addons.go:530] duration metric: took 6.770412998s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I0111 09:03:56.590498  768959 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0111 09:03:56.592242  768959 api_server.go:141] control plane version: v1.28.0
	I0111 09:03:56.592272  768959 api_server.go:131] duration metric: took 11.417387ms to wait for apiserver health ...
	I0111 09:03:56.592282  768959 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 09:03:56.596252  768959 system_pods.go:59] 8 kube-system pods found
	I0111 09:03:56.596298  768959 system_pods.go:61] "coredns-5dd5756b68-2gkt5" [fed76c30-7304-4890-9b21-67f48729cb7f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:03:56.596309  768959 system_pods.go:61] "etcd-old-k8s-version-931581" [2846557f-ef29-426a-9620-d7182b3d2e5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:03:56.596314  768959 system_pods.go:61] "kindnet-vl8hm" [1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3] Running
	I0111 09:03:56.596321  768959 system_pods.go:61] "kube-apiserver-old-k8s-version-931581" [8a8af346-ef92-4b59-9a35-5bcfa837543f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:03:56.596328  768959 system_pods.go:61] "kube-controller-manager-old-k8s-version-931581" [9cd07045-f552-41ef-8ea6-c2584ba61279] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:03:56.596339  768959 system_pods.go:61] "kube-proxy-xg9bv" [489cf8f4-64d7-44c0-b233-c8235d397932] Running
	I0111 09:03:56.596353  768959 system_pods.go:61] "kube-scheduler-old-k8s-version-931581" [8feb9a4e-b8ad-473e-ac05-2ce9ed02a7d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:03:56.596358  768959 system_pods.go:61] "storage-provisioner" [d7c7d49d-3c49-49aa-97c5-9692e0c23d99] Running
	I0111 09:03:56.596365  768959 system_pods.go:74] duration metric: took 4.058809ms to wait for pod list to return data ...
	I0111 09:03:56.596377  768959 default_sa.go:34] waiting for default service account to be created ...
	I0111 09:03:56.598939  768959 default_sa.go:45] found service account: "default"
	I0111 09:03:56.598968  768959 default_sa.go:55] duration metric: took 2.58515ms for default service account to be created ...
	I0111 09:03:56.598978  768959 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 09:03:56.602716  768959 system_pods.go:86] 8 kube-system pods found
	I0111 09:03:56.602748  768959 system_pods.go:89] "coredns-5dd5756b68-2gkt5" [fed76c30-7304-4890-9b21-67f48729cb7f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:03:56.602758  768959 system_pods.go:89] "etcd-old-k8s-version-931581" [2846557f-ef29-426a-9620-d7182b3d2e5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:03:56.602764  768959 system_pods.go:89] "kindnet-vl8hm" [1365f268-9ad9-4a72-9e9b-31f4e6c7a3e3] Running
	I0111 09:03:56.602777  768959 system_pods.go:89] "kube-apiserver-old-k8s-version-931581" [8a8af346-ef92-4b59-9a35-5bcfa837543f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:03:56.602785  768959 system_pods.go:89] "kube-controller-manager-old-k8s-version-931581" [9cd07045-f552-41ef-8ea6-c2584ba61279] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:03:56.602791  768959 system_pods.go:89] "kube-proxy-xg9bv" [489cf8f4-64d7-44c0-b233-c8235d397932] Running
	I0111 09:03:56.602804  768959 system_pods.go:89] "kube-scheduler-old-k8s-version-931581" [8feb9a4e-b8ad-473e-ac05-2ce9ed02a7d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:03:56.602809  768959 system_pods.go:89] "storage-provisioner" [d7c7d49d-3c49-49aa-97c5-9692e0c23d99] Running
	I0111 09:03:56.602816  768959 system_pods.go:126] duration metric: took 3.832985ms to wait for k8s-apps to be running ...
	I0111 09:03:56.602828  768959 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 09:03:56.602887  768959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:03:56.616966  768959 system_svc.go:56] duration metric: took 14.127633ms WaitForService to wait for kubelet
	I0111 09:03:56.617001  768959 kubeadm.go:587] duration metric: took 6.798581535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:03:56.617021  768959 node_conditions.go:102] verifying NodePressure condition ...
	I0111 09:03:56.620291  768959 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 09:03:56.620328  768959 node_conditions.go:123] node cpu capacity is 2
	I0111 09:03:56.620345  768959 node_conditions.go:105] duration metric: took 3.314587ms to run NodePressure ...
	I0111 09:03:56.620359  768959 start.go:242] waiting for startup goroutines ...
	I0111 09:03:56.620367  768959 start.go:247] waiting for cluster config update ...
	I0111 09:03:56.620383  768959 start.go:256] writing updated cluster config ...
	I0111 09:03:56.620674  768959 ssh_runner.go:195] Run: rm -f paused
	I0111 09:03:56.624719  768959 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:03:56.632869  768959 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-2gkt5" in "kube-system" namespace to be "Ready" or be gone ...
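The node_ready and pod_ready waits above poll the API server for the corresponding "Ready" conditions. A minimal client-go sketch of the node-side check follows; it is a hypothetical standalone program (kubeconfig path and poll interval are illustrative), not minikube's node_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has the Ready condition set to True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until the node reports Ready, roughly as the log does.
		for {
			if ready, err := nodeReady(cs, "old-k8s-version-931581"); err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(3 * time.Second)
		}
	}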
	I0111 09:04:00.598421  757749 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000473872s
	I0111 09:04:00.598459  757749 kubeadm.go:319] 
	I0111 09:04:00.598526  757749 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 09:04:00.598567  757749 kubeadm.go:319] 	- The kubelet is not running
	I0111 09:04:00.598685  757749 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 09:04:00.598696  757749 kubeadm.go:319] 
	I0111 09:04:00.598811  757749 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 09:04:00.598848  757749 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 09:04:00.598889  757749 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 09:04:00.598899  757749 kubeadm.go:319] 
	I0111 09:04:00.609837  757749 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:04:00.610361  757749 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 09:04:00.610477  757749 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 09:04:00.610770  757749 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 09:04:00.610789  757749 kubeadm.go:319] 
	I0111 09:04:00.610865  757749 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0111 09:04:00.611020  757749 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-630015 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-630015 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000473872s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
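The failed [kubelet-check] phase above is kubeadm repeatedly probing the kubelet's local healthz endpoint until the 4m0s deadline expires. A minimal Go sketch of an equivalent probe, as a standalone illustration rather than kubeadm's actual code:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait in the log
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://127.0.0.1:10248/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("kubelet healthy: %s\n", body)
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("kubelet did not become healthy before the deadline")
	}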
	
	I0111 09:04:00.611112  757749 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0111 09:04:01.030167  757749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:04:01.043832  757749 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 09:04:01.043904  757749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 09:04:01.052245  757749 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 09:04:01.052265  757749 kubeadm.go:158] found existing configuration files:
	
	I0111 09:04:01.052317  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 09:04:01.060474  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 09:04:01.060546  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 09:04:01.068369  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 09:04:01.076442  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 09:04:01.076507  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 09:04:01.084958  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 09:04:01.093001  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 09:04:01.093111  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 09:04:01.104919  757749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 09:04:01.116271  757749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 09:04:01.116345  757749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 09:04:01.125437  757749 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 09:04:01.180127  757749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 09:04:01.180214  757749 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 09:04:01.263691  757749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 09:04:01.263771  757749 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 09:04:01.263812  757749 kubeadm.go:319] OS: Linux
	I0111 09:04:01.263863  757749 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 09:04:01.263922  757749 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 09:04:01.263981  757749 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 09:04:01.264035  757749 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 09:04:01.264089  757749 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 09:04:01.264142  757749 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 09:04:01.264192  757749 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 09:04:01.264249  757749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 09:04:01.264301  757749 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 09:04:01.332134  757749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 09:04:01.332257  757749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 09:04:01.332354  757749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 09:04:01.340058  757749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0111 09:03:58.638897  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:00.642798  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	I0111 09:04:01.345221  757749 out.go:252]   - Generating certificates and keys ...
	I0111 09:04:01.345313  757749 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 09:04:01.345383  757749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 09:04:01.345459  757749 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0111 09:04:01.345520  757749 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0111 09:04:01.345591  757749 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0111 09:04:01.345645  757749 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0111 09:04:01.345707  757749 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0111 09:04:01.345769  757749 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0111 09:04:01.346280  757749 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0111 09:04:01.346736  757749 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0111 09:04:01.347161  757749 kubeadm.go:319] [certs] Using the existing "sa" key
	I0111 09:04:01.347255  757749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 09:04:02.153749  757749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 09:04:02.549592  757749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 09:04:02.718485  757749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 09:04:03.108587  757749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 09:04:03.292500  757749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 09:04:03.293149  757749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 09:04:03.295747  757749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W0111 09:04:03.138882  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:05.139310  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	I0111 09:04:03.298929  757749 out.go:252]   - Booting up control plane ...
	I0111 09:04:03.299039  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 09:04:03.299122  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 09:04:03.300853  757749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 09:04:03.316699  757749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 09:04:03.316810  757749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 09:04:03.325052  757749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 09:04:03.325437  757749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 09:04:03.325591  757749 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 09:04:03.470373  757749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 09:04:03.470503  757749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W0111 09:04:07.163015  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:09.639708  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:11.640257  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:14.139409  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:16.139540  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:18.638378  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:20.639219  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:23.138715  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	W0111 09:04:25.139724  768959 pod_ready.go:104] pod "coredns-5dd5756b68-2gkt5" is not "Ready", error: <nil>
	I0111 09:04:27.139152  768959 pod_ready.go:94] pod "coredns-5dd5756b68-2gkt5" is "Ready"
	I0111 09:04:27.139183  768959 pod_ready.go:86] duration metric: took 30.506287443s for pod "coredns-5dd5756b68-2gkt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.143272  768959 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.148395  768959 pod_ready.go:94] pod "etcd-old-k8s-version-931581" is "Ready"
	I0111 09:04:27.148420  768959 pod_ready.go:86] duration metric: took 5.118548ms for pod "etcd-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.151604  768959 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.156952  768959 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-931581" is "Ready"
	I0111 09:04:27.156984  768959 pod_ready.go:86] duration metric: took 5.350797ms for pod "kube-apiserver-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.160374  768959 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.337279  768959 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-931581" is "Ready"
	I0111 09:04:27.337306  768959 pod_ready.go:86] duration metric: took 176.903267ms for pod "kube-controller-manager-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.538349  768959 pod_ready.go:83] waiting for pod "kube-proxy-xg9bv" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:27.936908  768959 pod_ready.go:94] pod "kube-proxy-xg9bv" is "Ready"
	I0111 09:04:27.936939  768959 pod_ready.go:86] duration metric: took 398.563765ms for pod "kube-proxy-xg9bv" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:28.138352  768959 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:28.537008  768959 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-931581" is "Ready"
	I0111 09:04:28.537037  768959 pod_ready.go:86] duration metric: took 398.612322ms for pod "kube-scheduler-old-k8s-version-931581" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:04:28.537050  768959 pod_ready.go:40] duration metric: took 31.912293732s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:04:28.591854  768959 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I0111 09:04:28.595112  768959 out.go:203] 
	W0111 09:04:28.598081  768959 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I0111 09:04:28.601082  768959 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:04:28.604158  768959 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-931581" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 09:04:26 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:26.306287466Z" level=info msg="Created container e2c5f1a589123d27ad903daf6aa3bac49856c181b9976f8336c6af69771590d4: kube-system/storage-provisioner/storage-provisioner" id=c04d1d04-cc18-4e75-8337-b49154d32717 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:04:26 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:26.306949522Z" level=info msg="Starting container: e2c5f1a589123d27ad903daf6aa3bac49856c181b9976f8336c6af69771590d4" id=bccd96b7-30e5-4eb8-84c9-ed01901c9edb name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:04:26 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:26.308764413Z" level=info msg="Started container" PID=1680 containerID=e2c5f1a589123d27ad903daf6aa3bac49856c181b9976f8336c6af69771590d4 description=kube-system/storage-provisioner/storage-provisioner id=bccd96b7-30e5-4eb8-84c9-ed01901c9edb name=/runtime.v1.RuntimeService/StartContainer sandboxID=06de23e6fe6b6bba446a3779a4d29d1906e61285af1dd7f3f7e84f78426f901e
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.653723391Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8eb90bce-5049-41c7-8356-55330fdbfdbe name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.654662382Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=36495cd5-2161-4a4a-8b41-e7a98ced1233 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.655679059Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2/dashboard-metrics-scraper" id=091bb177-afd5-4730-92e2-d09f7d7ef323 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.655808628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.662221429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.662770064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.680174137Z" level=info msg="Created container 199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2/dashboard-metrics-scraper" id=091bb177-afd5-4730-92e2-d09f7d7ef323 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.68102575Z" level=info msg="Starting container: 199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576" id=c1e7433a-9569-4384-898f-e7665b62f2f9 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:04:27 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:27.683469515Z" level=info msg="Started container" PID=1695 containerID=199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2/dashboard-metrics-scraper id=c1e7433a-9569-4384-898f-e7665b62f2f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=069e662cb13297aed2df5077ddeb1b0ded5aae90a25dd7aef31464b758937a1f
	Jan 11 09:04:27 old-k8s-version-931581 conmon[1693]: conmon 199afbf4b56c27ef4457 <ninfo>: container 1695 exited with status 1
	Jan 11 09:04:28 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:28.28847287Z" level=info msg="Removing container: 09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e" id=b2a0a8b4-4ff1-406f-88d2-58dfe03a4fee name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:04:28 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:28.298464203Z" level=info msg="Error loading conmon cgroup of container 09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e: cgroup deleted" id=b2a0a8b4-4ff1-406f-88d2-58dfe03a4fee name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:04:28 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:28.303644865Z" level=info msg="Removed container 09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2/dashboard-metrics-scraper" id=b2a0a8b4-4ff1-406f-88d2-58dfe03a4fee name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.067062977Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.067102912Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.072452954Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.07248996Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.077037235Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.077075972Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.077104264Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.081672503Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:04:36 old-k8s-version-931581 crio[664]: time="2026-01-11T09:04:36.081709443Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	199afbf4b56c2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   069e662cb1329       dashboard-metrics-scraper-5f989dc9cf-4xxq2       kubernetes-dashboard
	e2c5f1a589123       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   06de23e6fe6b6       storage-provisioner                              kube-system
	bd56401a5c752       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago      Running             kubernetes-dashboard        0                   bffb8bc3abe74       kubernetes-dashboard-8694d4445c-cnrhh            kubernetes-dashboard
	b727470771749       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           50 seconds ago      Running             coredns                     1                   3a22c43f052cc       coredns-5dd5756b68-2gkt5                         kube-system
	00df60b332b70       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   cfe5770e056c5       busybox                                          default
	73d9a283074ad       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   06de23e6fe6b6       storage-provisioner                              kube-system
	3a849cb62cfb1       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           50 seconds ago      Running             kindnet-cni                 1                   2179f8076dfd6       kindnet-vl8hm                                    kube-system
	1d23c38218c09       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           50 seconds ago      Running             kube-proxy                  1                   563fc471abaf2       kube-proxy-xg9bv                                 kube-system
	3d8f07a908901       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           56 seconds ago      Running             etcd                        1                   19b70a2b5b8d3       etcd-old-k8s-version-931581                      kube-system
	8df31809024e5       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           56 seconds ago      Running             kube-scheduler              1                   7e2b7d3286e45       kube-scheduler-old-k8s-version-931581            kube-system
	be3cae0859a76       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           56 seconds ago      Running             kube-apiserver              1                   17fcc74ee9ebc       kube-apiserver-old-k8s-version-931581            kube-system
	da8138b59df82       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           56 seconds ago      Running             kube-controller-manager     1                   2c5ccd8484c4a       kube-controller-manager-old-k8s-version-931581   kube-system
	
	
	==> coredns [b727470771749969490fec69bb6f9cc8d254a874166542b81d6c9dc796246f68] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41102 - 41255 "HINFO IN 2972062232650211598.4557670974590327025. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005469954s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-931581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-931581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=old-k8s-version-931581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_02_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:02:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-931581
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:04:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:04:24 +0000   Sun, 11 Jan 2026 09:02:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:04:24 +0000   Sun, 11 Jan 2026 09:02:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:04:24 +0000   Sun, 11 Jan 2026 09:02:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 09:04:24 +0000   Sun, 11 Jan 2026 09:03:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-931581
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                af69ca9e-bf38-4107-aa6e-3001379de44e
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-2gkt5                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-old-k8s-version-931581                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-vl8hm                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-931581             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-931581    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-xg9bv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-931581             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-4xxq2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-cnrhh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node old-k8s-version-931581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node old-k8s-version-931581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node old-k8s-version-931581 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node old-k8s-version-931581 event: Registered Node old-k8s-version-931581 in Controller
	  Normal  NodeReady                91s                kubelet          Node old-k8s-version-931581 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node old-k8s-version-931581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node old-k8s-version-931581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node old-k8s-version-931581 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node old-k8s-version-931581 event: Registered Node old-k8s-version-931581 in Controller
	
	
	==> dmesg <==
	[Jan11 08:30] overlayfs: idmapped layers are currently not supported
	[Jan11 08:31] overlayfs: idmapped layers are currently not supported
	[Jan11 08:32] overlayfs: idmapped layers are currently not supported
	[Jan11 08:35] overlayfs: idmapped layers are currently not supported
	[Jan11 08:36] overlayfs: idmapped layers are currently not supported
	[Jan11 08:37] overlayfs: idmapped layers are currently not supported
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3d8f07a9089011370ce578f9e96c1fca727ce96da1e432ccd926d97b1ea3545e] <==
	{"level":"info","ts":"2026-01-11T09:03:50.229823Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T09:03:50.229864Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-11T09:03:50.240393Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"9f0758e1c58a86ed","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2026-01-11T09:03:50.240723Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:03:50.240754Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:03:50.240766Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:03:50.247632Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-11T09:03:50.247836Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T09:03:50.24786Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T09:03:50.24795Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:03:50.247958Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:03:50.906182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T09:03:50.906235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:03:50.906264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:03:50.906278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T09:03:50.906285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:03:50.906295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-11T09:03:50.906304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:03:50.916406Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-931581 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:03:50.916524Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:03:50.916581Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:03:50.917537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T09:03:50.920415Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:03:50.92048Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T09:03:50.921731Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 09:04:46 up  3:47,  0 user,  load average: 1.23, 1.40, 1.86
	Linux old-k8s-version-931581 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3a849cb62cfb1018959839831eb215de72cee5a888a77c6d5bd24e8f28010ef7] <==
	I0111 09:03:55.841344       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:03:55.841709       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 09:03:55.841847       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:03:55.841859       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:03:55.841871       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:03:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:03:56.051187       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:03:56.051284       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:03:56.051321       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:03:56.052226       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0111 09:04:26.052030       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0111 09:04:26.052038       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0111 09:04:26.052127       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0111 09:04:26.052218       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0111 09:04:27.552072       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 09:04:27.552105       1 metrics.go:72] Registering metrics
	I0111 09:04:27.552182       1 controller.go:711] "Syncing nftables rules"
	I0111 09:04:36.057378       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:04:36.057445       1 main.go:301] handling current node
	I0111 09:04:46.057680       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:04:46.057719       1 main.go:301] handling current node
	
	
	==> kube-apiserver [be3cae0859a767cb1d810c075beaa74a4697bb90f12bf159bea72e3e87da79a6] <==
	I0111 09:03:54.169489       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0111 09:03:54.409205       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:03:54.428547       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 09:03:54.450932       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0111 09:03:54.450961       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0111 09:03:54.452042       1 shared_informer.go:318] Caches are synced for configmaps
	I0111 09:03:54.452154       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0111 09:03:54.452192       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0111 09:03:54.453781       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0111 09:03:54.470392       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0111 09:03:54.481929       1 aggregator.go:166] initial CRD sync complete...
	I0111 09:03:54.482023       1 autoregister_controller.go:141] Starting autoregister controller
	I0111 09:03:54.482054       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 09:03:54.482085       1 cache.go:39] Caches are synced for autoregister controller
	I0111 09:03:55.036261       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0111 09:03:56.396651       1 controller.go:624] quota admission added evaluator for: namespaces
	I0111 09:03:56.449896       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0111 09:03:56.475437       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:03:56.484811       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:03:56.494931       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0111 09:03:56.556079       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.163.67"}
	I0111 09:03:56.574033       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.45.113"}
	I0111 09:04:07.203367       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0111 09:04:07.270948       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:04:07.273651       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [da8138b59df8207b192f5696b4f20a5ebb599324657398e62b0076ccd122e19f] <==
	I0111 09:04:07.282297       1 shared_informer.go:318] Caches are synced for resource quota
	I0111 09:04:07.326249       1 shared_informer.go:318] Caches are synced for attach detach
	I0111 09:04:07.336499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.013266ms"
	I0111 09:04:07.338703       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-cnrhh"
	I0111 09:04:07.340508       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-4xxq2"
	I0111 09:04:07.340620       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.698451ms"
	I0111 09:04:07.356797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="91.398441ms"
	I0111 09:04:07.365126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="137.96308ms"
	I0111 09:04:07.377850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.518173ms"
	I0111 09:04:07.377928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="38.064µs"
	I0111 09:04:07.388693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="23.245404ms"
	I0111 09:04:07.388812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.134µs"
	I0111 09:04:07.402014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="147.407µs"
	I0111 09:04:07.711231       1 shared_informer.go:318] Caches are synced for garbage collector
	I0111 09:04:07.734432       1 shared_informer.go:318] Caches are synced for garbage collector
	I0111 09:04:07.734462       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0111 09:04:13.279741       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.444987ms"
	I0111 09:04:13.279845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.412µs"
	I0111 09:04:17.277255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.354µs"
	I0111 09:04:18.278103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.752µs"
	I0111 09:04:19.278573       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.788µs"
	I0111 09:04:26.965367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.842623ms"
	I0111 09:04:26.966096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="150.812µs"
	I0111 09:04:28.306396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.313µs"
	I0111 09:04:37.671294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.22µs"
	
	
	==> kube-proxy [1d23c38218c09008ed0624126143b92ef4ae15746f4fb4fec5a67590f7b14aaf] <==
	I0111 09:03:55.706440       1 server_others.go:69] "Using iptables proxy"
	I0111 09:03:55.746979       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0111 09:03:55.807391       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:03:55.809177       1 server_others.go:152] "Using iptables Proxier"
	I0111 09:03:55.809210       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0111 09:03:55.809222       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0111 09:03:55.809244       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0111 09:03:55.809432       1 server.go:846] "Version info" version="v1.28.0"
	I0111 09:03:55.809441       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:03:55.830861       1 config.go:188] "Starting service config controller"
	I0111 09:03:55.830885       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0111 09:03:55.830915       1 config.go:97] "Starting endpoint slice config controller"
	I0111 09:03:55.830919       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0111 09:03:55.831315       1 config.go:315] "Starting node config controller"
	I0111 09:03:55.831322       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0111 09:03:55.938232       1 shared_informer.go:318] Caches are synced for node config
	I0111 09:03:55.940560       1 shared_informer.go:318] Caches are synced for service config
	I0111 09:03:55.940594       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8df31809024e57fa523fb773427e68d11f877dc356c992cd0201a1b33573775d] <==
	I0111 09:03:53.760721       1 serving.go:348] Generated self-signed cert in-memory
	I0111 09:03:54.547886       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0111 09:03:54.554238       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:03:54.568499       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0111 09:03:54.569141       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0111 09:03:54.569171       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 09:03:54.569378       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0111 09:03:54.569197       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0111 09:03:54.569459       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0111 09:03:54.569211       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0111 09:03:54.569877       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0111 09:03:54.669551       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0111 09:03:54.669684       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0111 09:03:54.670730       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: I0111 09:04:07.356948     792 topology_manager.go:215] "Topology Admit Handler" podUID="ab919fee-d1a7-4612-9a7b-adf934b0d7c4" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-cnrhh"
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: I0111 09:04:07.464131     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab919fee-d1a7-4612-9a7b-adf934b0d7c4-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-cnrhh\" (UID: \"ab919fee-d1a7-4612-9a7b-adf934b0d7c4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cnrhh"
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: I0111 09:04:07.464194     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvftm\" (UniqueName: \"kubernetes.io/projected/ab919fee-d1a7-4612-9a7b-adf934b0d7c4-kube-api-access-mvftm\") pod \"kubernetes-dashboard-8694d4445c-cnrhh\" (UID: \"ab919fee-d1a7-4612-9a7b-adf934b0d7c4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cnrhh"
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: I0111 09:04:07.464223     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9fa86067-a09c-407a-a141-9dc159038379-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-4xxq2\" (UID: \"9fa86067-a09c-407a-a141-9dc159038379\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2"
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: I0111 09:04:07.464248     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrw65\" (UniqueName: \"kubernetes.io/projected/9fa86067-a09c-407a-a141-9dc159038379-kube-api-access-wrw65\") pod \"dashboard-metrics-scraper-5f989dc9cf-4xxq2\" (UID: \"9fa86067-a09c-407a-a141-9dc159038379\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2"
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: W0111 09:04:07.697133     792 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/crio-bffb8bc3abe740da2cbeb011c534b6039c33b4a88a06424abb91a5fda150c89e WatchSource:0}: Error finding container bffb8bc3abe740da2cbeb011c534b6039c33b4a88a06424abb91a5fda150c89e: Status 404 returned error can't find the container with id bffb8bc3abe740da2cbeb011c534b6039c33b4a88a06424abb91a5fda150c89e
	Jan 11 09:04:07 old-k8s-version-931581 kubelet[792]: W0111 09:04:07.704474     792 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/93b661cce923865660b3c0dd333835fc2bdb49354829b762b5a11d02cb01e88b/crio-069e662cb13297aed2df5077ddeb1b0ded5aae90a25dd7aef31464b758937a1f WatchSource:0}: Error finding container 069e662cb13297aed2df5077ddeb1b0ded5aae90a25dd7aef31464b758937a1f: Status 404 returned error can't find the container with id 069e662cb13297aed2df5077ddeb1b0ded5aae90a25dd7aef31464b758937a1f
	Jan 11 09:04:13 old-k8s-version-931581 kubelet[792]: I0111 09:04:13.264744     792 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cnrhh" podStartSLOduration=1.728372427 podCreationTimestamp="2026-01-11 09:04:07 +0000 UTC" firstStartedPulling="2026-01-11 09:04:07.701363639 +0000 UTC m=+18.794396632" lastFinishedPulling="2026-01-11 09:04:12.237662068 +0000 UTC m=+23.330695062" observedRunningTime="2026-01-11 09:04:13.264443326 +0000 UTC m=+24.357476328" watchObservedRunningTime="2026-01-11 09:04:13.264670857 +0000 UTC m=+24.357703851"
	Jan 11 09:04:17 old-k8s-version-931581 kubelet[792]: I0111 09:04:17.253843     792 scope.go:117] "RemoveContainer" containerID="86efdb5e76da3e8c51e3525f636ebcf05ef7aa78015d25386f322dbc8c01f3e6"
	Jan 11 09:04:18 old-k8s-version-931581 kubelet[792]: I0111 09:04:18.258418     792 scope.go:117] "RemoveContainer" containerID="86efdb5e76da3e8c51e3525f636ebcf05ef7aa78015d25386f322dbc8c01f3e6"
	Jan 11 09:04:18 old-k8s-version-931581 kubelet[792]: I0111 09:04:18.259014     792 scope.go:117] "RemoveContainer" containerID="09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e"
	Jan 11 09:04:18 old-k8s-version-931581 kubelet[792]: E0111 09:04:18.259688     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4xxq2_kubernetes-dashboard(9fa86067-a09c-407a-a141-9dc159038379)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2" podUID="9fa86067-a09c-407a-a141-9dc159038379"
	Jan 11 09:04:19 old-k8s-version-931581 kubelet[792]: I0111 09:04:19.262236     792 scope.go:117] "RemoveContainer" containerID="09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e"
	Jan 11 09:04:19 old-k8s-version-931581 kubelet[792]: E0111 09:04:19.262995     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4xxq2_kubernetes-dashboard(9fa86067-a09c-407a-a141-9dc159038379)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2" podUID="9fa86067-a09c-407a-a141-9dc159038379"
	Jan 11 09:04:26 old-k8s-version-931581 kubelet[792]: I0111 09:04:26.277786     792 scope.go:117] "RemoveContainer" containerID="73d9a283074adb9ebdd703527aa0eca8069e6e36375607411f549fdc67d6fa8e"
	Jan 11 09:04:27 old-k8s-version-931581 kubelet[792]: I0111 09:04:27.653081     792 scope.go:117] "RemoveContainer" containerID="09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e"
	Jan 11 09:04:28 old-k8s-version-931581 kubelet[792]: I0111 09:04:28.287130     792 scope.go:117] "RemoveContainer" containerID="09aedefb00399dfb669a4be9dda28c75ab812151041c171412b6274e23fb5e8e"
	Jan 11 09:04:28 old-k8s-version-931581 kubelet[792]: I0111 09:04:28.287591     792 scope.go:117] "RemoveContainer" containerID="199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576"
	Jan 11 09:04:28 old-k8s-version-931581 kubelet[792]: E0111 09:04:28.288239     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4xxq2_kubernetes-dashboard(9fa86067-a09c-407a-a141-9dc159038379)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2" podUID="9fa86067-a09c-407a-a141-9dc159038379"
	Jan 11 09:04:37 old-k8s-version-931581 kubelet[792]: I0111 09:04:37.652885     792 scope.go:117] "RemoveContainer" containerID="199afbf4b56c27ef445710da68eac5fac53c99ca375866a88ba9926641117576"
	Jan 11 09:04:37 old-k8s-version-931581 kubelet[792]: E0111 09:04:37.653246     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4xxq2_kubernetes-dashboard(9fa86067-a09c-407a-a141-9dc159038379)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4xxq2" podUID="9fa86067-a09c-407a-a141-9dc159038379"
	Jan 11 09:04:40 old-k8s-version-931581 kubelet[792]: I0111 09:04:40.873530     792 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 11 09:04:40 old-k8s-version-931581 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 09:04:40 old-k8s-version-931581 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 09:04:40 old-k8s-version-931581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [bd56401a5c752eae1d3614979ba146f00260bc3c39f492b048ec64ee36838966] <==
	2026/01/11 09:04:12 Using namespace: kubernetes-dashboard
	2026/01/11 09:04:12 Using in-cluster config to connect to apiserver
	2026/01/11 09:04:12 Using secret token for csrf signing
	2026/01/11 09:04:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 09:04:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 09:04:12 Successful initial request to the apiserver, version: v1.28.0
	2026/01/11 09:04:12 Generating JWE encryption key
	2026/01/11 09:04:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 09:04:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 09:04:13 Initializing JWE encryption key from synchronized object
	2026/01/11 09:04:13 Creating in-cluster Sidecar client
	2026/01/11 09:04:13 Serving insecurely on HTTP port: 9090
	2026/01/11 09:04:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:04:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:04:12 Starting overwatch
	
	
	==> storage-provisioner [73d9a283074adb9ebdd703527aa0eca8069e6e36375607411f549fdc67d6fa8e] <==
	I0111 09:03:55.794274       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 09:04:25.798598       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e2c5f1a589123d27ad903daf6aa3bac49856c181b9976f8336c6af69771590d4] <==
	I0111 09:04:26.325266       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 09:04:26.338998       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 09:04:26.339055       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0111 09:04:43.738895       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 09:04:43.740210       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8a0480bd-74ad-46e7-a509-867a9d06bbdb", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-931581_292d6665-8ad3-4089-a831-198cca10d7f7 became leader
	I0111 09:04:43.740407       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-931581_292d6665-8ad3-4089-a831-198cca10d7f7!
	I0111 09:04:43.841024       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-931581_292d6665-8ad3-4089-a831-198cca10d7f7!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-931581 -n old-k8s-version-931581
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-931581 -n old-k8s-version-931581: exit status 2 (368.191726ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
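Editor's note: the {{.APIServer}} argument in the status command above is a Go text/template rendered over minikube's status fields, so only that one field is printed while the exit code still reflects the overall cluster state; that is why "Running" can be printed alongside a non-zero exit. A minimal sketch of that kind of templated status output, using a simplified stand-in struct (not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Simplified stand-in for minikube's status fields; the real struct has more.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
	// A --format style Go template: {{.APIServer}} selects a single field.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// The CLI can still exit non-zero when some component is not running,
	// which is why the harness annotates "exit status 2 (may be ok)".
	os.Exit(2)
}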
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-931581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.53s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-236664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-236664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (270.563908ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:05:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-236664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
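Editor's note: the stderr above shows where this failure originates. Before enabling an addon, minikube checks whether the cluster is paused, and on this profile that check shells out to "sudo runc list -f json", which fails because /run/runc does not exist on the crio-based node. A minimal sketch of that kind of pause probe; the state-struct field names are assumptions about runc's JSON output, not minikube's actual code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Assumed shape of one entry in `runc list -f json` output.
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "running", "paused"
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// This is where "open /run/runc: no such file or directory" would
		// surface when the runc state directory is absent (crio runtime).
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}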
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-236664 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-236664 describe deploy/metrics-server -n kube-system: exit status 1 (78.722985ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-236664 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
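Editor's note: the expectation above follows from the flags passed to "addons enable": --registries=MetricsServer=fake.domain combined with --images=MetricsServer=registry.k8s.io/echoserver:1.4 should leave the metrics-server deployment pointing at fake.domain/registry.k8s.io/echoserver:1.4. Since the deployment was never created here, the check has nothing to inspect. A minimal sketch of that kind of image check via kubectl's jsonpath output; illustrative only, not the test's actual helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// deploymentImages returns the space-separated container images of a deployment.
func deploymentImages(context, ns, deploy string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"-n", ns, "get", "deploy", deploy,
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
	return string(out), err
}

func main() {
	images, err := deploymentImages("no-preload-236664", "kube-system", "metrics-server")
	if err != nil {
		// Matches the NotFound failure above: the deployment does not exist.
		fmt.Println("deployment lookup failed:", err)
		return
	}
	want := "fake.domain/registry.k8s.io/echoserver:1.4"
	fmt.Printf("contains %q: %v\n", want, strings.Contains(images, want))
}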
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-236664
helpers_test.go:244: (dbg) docker inspect no-preload-236664:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7",
	        "Created": "2026-01-11T09:04:51.004254013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 773435,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:04:51.06248341Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/hosts",
	        "LogPath": "/var/lib/docker/containers/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7-json.log",
	        "Name": "/no-preload-236664",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-236664:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-236664",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7",
	                "LowerDir": "/var/lib/docker/overlay2/3ff6b15e89b8c004230ac70e5f5994d0fb6ac775714bb351b9819d6dc154f20e-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ff6b15e89b8c004230ac70e5f5994d0fb6ac775714bb351b9819d6dc154f20e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ff6b15e89b8c004230ac70e5f5994d0fb6ac775714bb351b9819d6dc154f20e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ff6b15e89b8c004230ac70e5f5994d0fb6ac775714bb351b9819d6dc154f20e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-236664",
	                "Source": "/var/lib/docker/volumes/no-preload-236664/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-236664",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-236664",
	                "name.minikube.sigs.k8s.io": "no-preload-236664",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6ce85e65bfc635c60fc72a0f7ab29b0d01b86e3a7dde3116791333128f457a19",
	            "SandboxKey": "/var/run/docker/netns/6ce85e65bfc6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-236664": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:b8:60:5f:16:ea",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d2de3def74a111cc7c6606a54a81f8ccf25a54c9637f0b4509f31f3903e872a",
	                    "EndpointID": "a6d5e98f86ef1e72a709bdf1dd961ba00b091141b0f64f7653c3658a0f172823",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-236664",
	                        "ad25e0395513"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-236664 -n no-preload-236664
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-236664 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-236664 logs -n 25: (1.196903231s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-293572 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ ssh     │ -p cilium-293572 sudo crio config                                                                                                                                                                                                             │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │                     │
	│ delete  │ -p cilium-293572                                                                                                                                                                                                                              │ cilium-293572             │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │ 11 Jan 26 08:55 UTC │
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │ 11 Jan 26 08:56 UTC │
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ delete  │ -p cert-expiration-448134                                                                                                                                                                                                                     │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ start   │ -p force-systemd-flag-630015 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-630015 │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │                     │
	│ delete  │ -p force-systemd-env-472282                                                                                                                                                                                                                   │ force-systemd-env-472282  │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:01 UTC │
	│ start   │ -p cert-options-459267 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ cert-options-459267 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ -p cert-options-459267 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ delete  │ -p cert-options-459267                                                                                                                                                                                                                        │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-931581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │                     │
	│ stop    │ -p old-k8s-version-931581 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-931581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:04 UTC │
	│ image   │ old-k8s-version-931581 image list --format=json                                                                                                                                                                                               │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ pause   │ -p old-k8s-version-931581 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │                     │
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                                                                                     │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                                                                                     │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-236664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:04:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:04:50.012599  773124 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:04:50.012751  773124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:04:50.012760  773124 out.go:374] Setting ErrFile to fd 2...
	I0111 09:04:50.012766  773124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:04:50.013066  773124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:04:50.013545  773124 out.go:368] Setting JSON to false
	I0111 09:04:50.014557  773124 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13640,"bootTime":1768108650,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:04:50.014656  773124 start.go:143] virtualization:  
	I0111 09:04:50.019243  773124 out.go:179] * [no-preload-236664] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:04:50.022872  773124 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:04:50.022914  773124 notify.go:221] Checking for updates...
	I0111 09:04:50.029781  773124 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:04:50.033039  773124 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:04:50.036176  773124 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:04:50.039380  773124 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:04:50.042519  773124 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:04:50.046437  773124 config.go:182] Loaded profile config "force-systemd-flag-630015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:04:50.046630  773124 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:04:50.084799  773124 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:04:50.084945  773124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:04:50.142202  773124 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:04:50.131849383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:04:50.142307  773124 docker.go:319] overlay module found
	I0111 09:04:50.145396  773124 out.go:179] * Using the docker driver based on user configuration
	I0111 09:04:50.148246  773124 start.go:309] selected driver: docker
	I0111 09:04:50.148269  773124 start.go:928] validating driver "docker" against <nil>
	I0111 09:04:50.148298  773124 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:04:50.149036  773124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:04:50.205822  773124 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:04:50.196816598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:04:50.205972  773124 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 09:04:50.206275  773124 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:04:50.209192  773124 out.go:179] * Using Docker driver with root privileges
	I0111 09:04:50.212108  773124 cni.go:84] Creating CNI manager for ""
	I0111 09:04:50.212179  773124 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:04:50.212194  773124 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 09:04:50.212271  773124 start.go:353] cluster config:
	{Name:no-preload-236664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:04:50.217196  773124 out.go:179] * Starting "no-preload-236664" primary control-plane node in "no-preload-236664" cluster
	I0111 09:04:50.220061  773124 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:04:50.223152  773124 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:04:50.226109  773124 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:04:50.226214  773124 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:04:50.226353  773124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/config.json ...
	I0111 09:04:50.226384  773124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/config.json: {Name:mkba1a0b5ba3000312cbe33648354fb95bfc9db8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:04:50.226808  773124 cache.go:107] acquiring lock: {Name:mke7592fddd2045b523fca2428ddc0663b88772c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:04:50.226875  773124 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0111 09:04:50.226890  773124 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.901µs
	I0111 09:04:50.226904  773124 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0111 09:04:50.226914  773124 cache.go:107] acquiring lock: {Name:mka93ed5255d21ece6b85aca20055b51e1583edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:04:50.226950  773124 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I0111 09:04:50.226956  773124 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 42.52µs
	I0111 09:04:50.226961  773124 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I0111 09:04:50.226970  773124 cache.go:107] acquiring lock: {Name:mk1920546e4d844033ab047e82c06a7f1485d45d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:04:50.227003  773124 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I0111 09:04:50.227008  773124 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 38.918µs
	I0111 09:04:50.227014  773124 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I0111 09:04:50.227024  773124 cache.go:107] acquiring lock: {Name:mk17b9d3288a8c36f55558137618c53fb114bff4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:04:50.227055  773124 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I0111 09:04:50.227060  773124 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 36.686µs
	I0111 09:04:50.227066  773124 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I0111 09:04:50.227074  773124 cache.go:107] acquiring lock: {Name:mk3545fa2d0a8ca45b860e43eaaa700d6213211e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:04:50.227101  773124 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I0111 09:04:50.227107  773124 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 34.363µs
	I0111 09:04:50.227113  773124 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I0111 09:04:50.227121  773124 cache.go:107] acquiring lock: {Name:mk3e1f7f5f36f7e3b242ff5d86252009cd03b858 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:04:50.227147  773124 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0111 09:04:50.227151  773124 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.352µs
	I0111 09:04:50.227156  773124 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0111 09:04:50.227164  773124 cache.go:107] acquiring lock: {Name:mkbecbc2e8fbcc821087042d95b724409aa47662 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:04:50.227192  773124 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I0111 09:04:50.227196  773124 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 32.977µs
	I0111 09:04:50.227201  773124 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I0111 09:04:50.227210  773124 cache.go:107] acquiring lock: {Name:mke213d3c5eada4cb2452801d6ba8056e0c2260a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:04:50.227235  773124 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I0111 09:04:50.227240  773124 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 31.402µs
	I0111 09:04:50.227246  773124 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I0111 09:04:50.227251  773124 cache.go:87] Successfully saved all images to host disk.
	I0111 09:04:50.247003  773124 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:04:50.247031  773124 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:04:50.247052  773124 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:04:50.247084  773124 start.go:360] acquireMachinesLock for no-preload-236664: {Name:mk79de85616a4c1001da7e12d7ef8a42711def92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:04:50.247212  773124 start.go:364] duration metric: took 107.538µs to acquireMachinesLock for "no-preload-236664"
	I0111 09:04:50.247246  773124 start.go:93] Provisioning new machine with config: &{Name:no-preload-236664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:04:50.247325  773124 start.go:125] createHost starting for "" (driver="docker")
	I0111 09:04:50.252850  773124 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 09:04:50.253121  773124 start.go:159] libmachine.API.Create for "no-preload-236664" (driver="docker")
	I0111 09:04:50.253169  773124 client.go:173] LocalClient.Create starting
	I0111 09:04:50.253273  773124 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem
	I0111 09:04:50.253314  773124 main.go:144] libmachine: Decoding PEM data...
	I0111 09:04:50.253334  773124 main.go:144] libmachine: Parsing certificate...
	I0111 09:04:50.253387  773124 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem
	I0111 09:04:50.253410  773124 main.go:144] libmachine: Decoding PEM data...
	I0111 09:04:50.253429  773124 main.go:144] libmachine: Parsing certificate...
	I0111 09:04:50.253814  773124 cli_runner.go:164] Run: docker network inspect no-preload-236664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 09:04:50.270815  773124 cli_runner.go:211] docker network inspect no-preload-236664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 09:04:50.270934  773124 network_create.go:284] running [docker network inspect no-preload-236664] to gather additional debugging logs...
	I0111 09:04:50.270952  773124 cli_runner.go:164] Run: docker network inspect no-preload-236664
	W0111 09:04:50.287951  773124 cli_runner.go:211] docker network inspect no-preload-236664 returned with exit code 1
	I0111 09:04:50.287987  773124 network_create.go:287] error running [docker network inspect no-preload-236664]: docker network inspect no-preload-236664: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-236664 not found
	I0111 09:04:50.288002  773124 network_create.go:289] output of [docker network inspect no-preload-236664]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-236664 not found
	
	** /stderr **
	I0111 09:04:50.288102  773124 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:04:50.305075  773124 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-113e3e286bbe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:2e:86:95:08:19} reservation:<nil>}
	I0111 09:04:50.305420  773124 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461c1a9d970d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:7e:25:fe:d0:0d} reservation:<nil>}
	I0111 09:04:50.305797  773124 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a38e10816f85 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:42:af:ae:32:ae} reservation:<nil>}
	I0111 09:04:50.306030  773124 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6ac2cdd04afb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:0e:43:8e:04:e3} reservation:<nil>}
	I0111 09:04:50.306592  773124 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4a5a0}
	I0111 09:04:50.306618  773124 network_create.go:124] attempt to create docker network no-preload-236664 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 09:04:50.306686  773124 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-236664 no-preload-236664
	I0111 09:04:50.364220  773124 network_create.go:108] docker network no-preload-236664 192.168.85.0/24 created
	I0111 09:04:50.364257  773124 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-236664" container
	I0111 09:04:50.364331  773124 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 09:04:50.380736  773124 cli_runner.go:164] Run: docker volume create no-preload-236664 --label name.minikube.sigs.k8s.io=no-preload-236664 --label created_by.minikube.sigs.k8s.io=true
	I0111 09:04:50.399529  773124 oci.go:103] Successfully created a docker volume no-preload-236664
	I0111 09:04:50.399641  773124 cli_runner.go:164] Run: docker run --rm --name no-preload-236664-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-236664 --entrypoint /usr/bin/test -v no-preload-236664:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 09:04:50.927896  773124 oci.go:107] Successfully prepared a docker volume no-preload-236664
	I0111 09:04:50.927975  773124 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	W0111 09:04:50.928108  773124 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 09:04:50.928223  773124 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 09:04:50.987586  773124 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-236664 --name no-preload-236664 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-236664 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-236664 --network no-preload-236664 --ip 192.168.85.2 --volume no-preload-236664:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 09:04:51.287953  773124 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Running}}
	I0111 09:04:51.310600  773124 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:04:51.336502  773124 cli_runner.go:164] Run: docker exec no-preload-236664 stat /var/lib/dpkg/alternatives/iptables
	I0111 09:04:51.391002  773124 oci.go:144] the created container "no-preload-236664" has a running status.
	I0111 09:04:51.391029  773124 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa...
	I0111 09:04:52.574450  773124 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 09:04:52.595298  773124 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:04:52.612437  773124 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 09:04:52.612460  773124 kic_runner.go:114] Args: [docker exec --privileged no-preload-236664 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 09:04:52.655215  773124 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:04:52.673839  773124 machine.go:94] provisionDockerMachine start ...
	I0111 09:04:52.673935  773124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:04:52.693015  773124 main.go:144] libmachine: Using SSH client type: native
	I0111 09:04:52.693370  773124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33793 <nil> <nil>}
	I0111 09:04:52.693380  773124 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 09:04:52.694081  773124 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0111 09:04:55.845906  773124 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-236664
	
	I0111 09:04:55.845933  773124 ubuntu.go:182] provisioning hostname "no-preload-236664"
	I0111 09:04:55.846000  773124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:04:55.863862  773124 main.go:144] libmachine: Using SSH client type: native
	I0111 09:04:55.864191  773124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33793 <nil> <nil>}
	I0111 09:04:55.864212  773124 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-236664 && echo "no-preload-236664" | sudo tee /etc/hostname
	I0111 09:04:56.028827  773124 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-236664
	
	I0111 09:04:56.028916  773124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:04:56.048353  773124 main.go:144] libmachine: Using SSH client type: native
	I0111 09:04:56.048671  773124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33793 <nil> <nil>}
	I0111 09:04:56.048697  773124 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-236664' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-236664/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-236664' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 09:04:56.205000  773124 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 09:04:56.205030  773124 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 09:04:56.205062  773124 ubuntu.go:190] setting up certificates
	I0111 09:04:56.205076  773124 provision.go:84] configureAuth start
	I0111 09:04:56.205148  773124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-236664
	I0111 09:04:56.226519  773124 provision.go:143] copyHostCerts
	I0111 09:04:56.226603  773124 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 09:04:56.226616  773124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 09:04:56.226701  773124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 09:04:56.226814  773124 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 09:04:56.226829  773124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 09:04:56.226860  773124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 09:04:56.226918  773124 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 09:04:56.226928  773124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 09:04:56.226952  773124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 09:04:56.227002  773124 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.no-preload-236664 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-236664]
	I0111 09:04:56.283350  773124 provision.go:177] copyRemoteCerts
	I0111 09:04:56.283417  773124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 09:04:56.283462  773124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:04:56.307616  773124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33793 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:04:56.422603  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 09:04:56.442536  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0111 09:04:56.461091  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 09:04:56.479862  773124 provision.go:87] duration metric: took 274.768283ms to configureAuth
	I0111 09:04:56.479891  773124 ubuntu.go:206] setting minikube options for container-runtime
	I0111 09:04:56.480080  773124 config.go:182] Loaded profile config "no-preload-236664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:04:56.480185  773124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:04:56.500971  773124 main.go:144] libmachine: Using SSH client type: native
	I0111 09:04:56.501291  773124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33793 <nil> <nil>}
	I0111 09:04:56.501319  773124 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 09:04:56.833316  773124 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 09:04:56.833342  773124 machine.go:97] duration metric: took 4.159480952s to provisionDockerMachine
	I0111 09:04:56.833353  773124 client.go:176] duration metric: took 6.580172698s to LocalClient.Create
	I0111 09:04:56.833366  773124 start.go:167] duration metric: took 6.58024844s to libmachine.API.Create "no-preload-236664"
	I0111 09:04:56.833374  773124 start.go:293] postStartSetup for "no-preload-236664" (driver="docker")
	I0111 09:04:56.833384  773124 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 09:04:56.833468  773124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 09:04:56.833514  773124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:04:56.852191  773124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33793 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:04:56.954748  773124 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 09:04:56.958505  773124 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 09:04:56.958542  773124 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 09:04:56.958571  773124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 09:04:56.958651  773124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 09:04:56.958752  773124 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 09:04:56.958866  773124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 09:04:56.966830  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:04:56.985498  773124 start.go:296] duration metric: took 152.110879ms for postStartSetup
	I0111 09:04:56.985859  773124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-236664
	I0111 09:04:57.012465  773124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/config.json ...
	I0111 09:04:57.012761  773124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 09:04:57.012804  773124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:04:57.030757  773124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33793 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:04:57.131629  773124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 09:04:57.136577  773124 start.go:128] duration metric: took 6.88923858s to createHost
	I0111 09:04:57.136605  773124 start.go:83] releasing machines lock for "no-preload-236664", held for 6.88938035s
	I0111 09:04:57.136682  773124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-236664
	I0111 09:04:57.153635  773124 ssh_runner.go:195] Run: cat /version.json
	I0111 09:04:57.153690  773124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:04:57.153640  773124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 09:04:57.153795  773124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:04:57.171038  773124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33793 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:04:57.192212  773124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33793 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:04:57.374808  773124 ssh_runner.go:195] Run: systemctl --version
	I0111 09:04:57.381977  773124 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 09:04:57.416095  773124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 09:04:57.420477  773124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 09:04:57.420583  773124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 09:04:57.453966  773124 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 09:04:57.454010  773124 start.go:496] detecting cgroup driver to use...
	I0111 09:04:57.454080  773124 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 09:04:57.454185  773124 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 09:04:57.479945  773124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 09:04:57.493988  773124 docker.go:218] disabling cri-docker service (if available) ...
	I0111 09:04:57.494103  773124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 09:04:57.513826  773124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 09:04:57.533617  773124 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 09:04:57.657289  773124 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 09:04:57.782823  773124 docker.go:234] disabling docker service ...
	I0111 09:04:57.782913  773124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 09:04:57.804376  773124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 09:04:57.817926  773124 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 09:04:57.933301  773124 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 09:04:58.071064  773124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 09:04:58.085285  773124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 09:04:58.101028  773124 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 09:04:58.101167  773124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:04:58.111434  773124 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 09:04:58.111560  773124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:04:58.120980  773124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:04:58.130301  773124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:04:58.139741  773124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 09:04:58.148280  773124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:04:58.157267  773124 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:04:58.171247  773124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:04:58.183478  773124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 09:04:58.192743  773124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 09:04:58.201522  773124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:04:58.319625  773124 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 09:04:58.480562  773124 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 09:04:58.480636  773124 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 09:04:58.484652  773124 start.go:574] Will wait 60s for crictl version
	I0111 09:04:58.484722  773124 ssh_runner.go:195] Run: which crictl
	I0111 09:04:58.488438  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 09:04:58.513405  773124 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 09:04:58.513507  773124 ssh_runner.go:195] Run: crio --version
	I0111 09:04:58.541410  773124 ssh_runner.go:195] Run: crio --version
	I0111 09:04:58.574076  773124 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 09:04:58.576898  773124 cli_runner.go:164] Run: docker network inspect no-preload-236664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:04:58.593400  773124 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 09:04:58.597335  773124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:04:58.607120  773124 kubeadm.go:884] updating cluster {Name:no-preload-236664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 09:04:58.607232  773124 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:04:58.607282  773124 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:04:58.637955  773124 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I0111 09:04:58.637981  773124 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0111 09:04:58.638017  773124 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:04:58.638195  773124 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I0111 09:04:58.638364  773124 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I0111 09:04:58.638459  773124 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I0111 09:04:58.638552  773124 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I0111 09:04:58.638724  773124 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I0111 09:04:58.638814  773124 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I0111 09:04:58.638929  773124 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I0111 09:04:58.641479  773124 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I0111 09:04:58.641962  773124 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I0111 09:04:58.642141  773124 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I0111 09:04:58.642291  773124 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I0111 09:04:58.642420  773124 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I0111 09:04:58.642554  773124 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I0111 09:04:58.642683  773124 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I0111 09:04:58.642770  773124 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:04:59.039803  773124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0
	I0111 09:04:59.043629  773124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I0111 09:04:59.046936  773124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0
	I0111 09:04:59.053102  773124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I0111 09:04:59.056283  773124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0
	I0111 09:04:59.056795  773124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I0111 09:04:59.075792  773124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0
	I0111 09:04:59.139076  773124 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5" in container runtime
	I0111 09:04:59.139203  773124 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I0111 09:04:59.139314  773124 ssh_runner.go:195] Run: which crictl
	I0111 09:04:59.152049  773124 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I0111 09:04:59.152146  773124 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I0111 09:04:59.152253  773124 ssh_runner.go:195] Run: which crictl
	I0111 09:04:59.204698  773124 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856" in container runtime
	I0111 09:04:59.204737  773124 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I0111 09:04:59.204797  773124 ssh_runner.go:195] Run: which crictl
	I0111 09:04:59.204887  773124 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I0111 09:04:59.204905  773124 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I0111 09:04:59.204928  773124 ssh_runner.go:195] Run: which crictl
	I0111 09:04:59.272169  773124 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I0111 09:04:59.272225  773124 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I0111 09:04:59.272282  773124 ssh_runner.go:195] Run: which crictl
	I0111 09:04:59.272281  773124 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f" in container runtime
	I0111 09:04:59.272372  773124 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I0111 09:04:59.272401  773124 ssh_runner.go:195] Run: which crictl
	I0111 09:04:59.272490  773124 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0" in container runtime
	I0111 09:04:59.272548  773124 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I0111 09:04:59.272574  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I0111 09:04:59.272630  773124 ssh_runner.go:195] Run: which crictl
	I0111 09:04:59.272681  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0111 09:04:59.272751  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I0111 09:04:59.272823  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I0111 09:04:59.348354  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I0111 09:04:59.348430  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I0111 09:04:59.348478  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I0111 09:04:59.348581  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I0111 09:04:59.348649  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I0111 09:04:59.348739  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0111 09:04:59.348804  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I0111 09:04:59.466299  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I0111 09:04:59.466375  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I0111 09:04:59.466468  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I0111 09:04:59.466535  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I0111 09:04:59.466483  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I0111 09:04:59.466613  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I0111 09:04:59.466672  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0111 09:04:59.585400  773124 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I0111 09:04:59.585502  773124 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0
	I0111 09:04:59.585602  773124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I0111 09:04:59.585680  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I0111 09:04:59.585737  773124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I0111 09:04:59.585809  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I0111 09:04:59.585869  773124 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I0111 09:04:59.585940  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I0111 09:04:59.585888  773124 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0
	I0111 09:04:59.586023  773124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I0111 09:04:59.586054  773124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I0111 09:04:59.628779  773124 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I0111 09:04:59.628890  773124 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0
	I0111 09:04:59.629000  773124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I0111 09:04:59.629084  773124 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I0111 09:04:59.629128  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I0111 09:04:59.628824  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (22434816 bytes)
	I0111 09:04:59.661569  773124 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I0111 09:04:59.661609  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (24702976 bytes)
	I0111 09:04:59.661724  773124 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I0111 09:04:59.661764  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I0111 09:04:59.661920  773124 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I0111 09:04:59.661942  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (15415808 bytes)
	I0111 09:04:59.662008  773124 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I0111 09:04:59.662104  773124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I0111 09:04:59.662176  773124 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0
	I0111 09:04:59.662250  773124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I0111 09:04:59.740671  773124 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I0111 09:04:59.740716  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I0111 09:04:59.740792  773124 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I0111 09:04:59.740825  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (20682752 bytes)
	I0111 09:04:59.754331  773124 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I0111 09:04:59.754430  773124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W0111 09:04:59.900424  773124 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0111 09:04:59.900637  773124 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:05:00.211626  773124 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0111 09:05:00.212266  773124 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:05:00.212385  773124 ssh_runner.go:195] Run: which crictl
	I0111 09:05:00.224973  773124 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I0111 09:05:00.338271  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:05:00.588590  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:05:00.664534  773124 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I0111 09:05:00.665181  773124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0
	I0111 09:05:00.700146  773124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:05:02.329502  773124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0: (1.664258012s)
	I0111 09:05:02.329527  773124 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I0111 09:05:02.329547  773124 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I0111 09:05:02.329547  773124 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.629365787s)
	I0111 09:05:02.329583  773124 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0111 09:05:02.329610  773124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	I0111 09:05:02.329659  773124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0111 09:05:03.647505  773124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (1.317868923s)
	I0111 09:05:03.647529  773124 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I0111 09:05:03.647546  773124 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I0111 09:05:03.647597  773124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I0111 09:05:03.647670  773124 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.317999747s)
	I0111 09:05:03.647685  773124 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0111 09:05:03.647699  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0111 09:05:04.943624  773124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.296003885s)
	I0111 09:05:04.943656  773124 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I0111 09:05:04.943686  773124 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I0111 09:05:04.943757  773124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I0111 09:05:06.108305  773124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.164521719s)
	I0111 09:05:06.108334  773124 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I0111 09:05:06.108357  773124 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I0111 09:05:06.108413  773124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I0111 09:05:07.851700  773124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.743258208s)
	I0111 09:05:07.851732  773124 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I0111 09:05:07.851759  773124 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I0111 09:05:07.851810  773124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	I0111 09:05:09.269064  773124 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.41722784s)
	I0111 09:05:09.269092  773124 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I0111 09:05:09.269111  773124 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0111 09:05:09.269157  773124 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0111 09:05:09.848670  773124 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0111 09:05:09.848711  773124 cache_images.go:125] Successfully loaded all cached images
	I0111 09:05:09.848718  773124 cache_images.go:94] duration metric: took 11.210722699s to LoadCachedImages
	I0111 09:05:09.848731  773124 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0111 09:05:09.848830  773124 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-236664 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 09:05:09.848918  773124 ssh_runner.go:195] Run: crio config
	I0111 09:05:09.908335  773124 cni.go:84] Creating CNI manager for ""
	I0111 09:05:09.908361  773124 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:05:09.908411  773124 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 09:05:09.908442  773124 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-236664 NodeName:no-preload-236664 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 09:05:09.908587  773124 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-236664"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 09:05:09.908672  773124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 09:05:09.916961  773124 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I0111 09:05:09.917040  773124 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I0111 09:05:09.925114  773124 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
	I0111 09:05:09.925223  773124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I0111 09:05:09.925319  773124 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet.sha256
	I0111 09:05:09.925351  773124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:05:09.925453  773124 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm.sha256
	I0111 09:05:09.925517  773124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I0111 09:05:09.930852  773124 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I0111 09:05:09.930894  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/linux/arm64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (68354232 bytes)
	I0111 09:05:09.938815  773124 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I0111 09:05:09.938851  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/linux/arm64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (55247032 bytes)
	I0111 09:05:09.949188  773124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I0111 09:05:09.971633  773124 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I0111 09:05:09.971678  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/cache/linux/arm64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (54329636 bytes)
	I0111 09:05:10.855346  773124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 09:05:10.864468  773124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0111 09:05:10.880406  773124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 09:05:10.899840  773124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I0111 09:05:10.915347  773124 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 09:05:10.919400  773124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:05:10.929951  773124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:05:11.059168  773124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:05:11.078399  773124 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664 for IP: 192.168.85.2
	I0111 09:05:11.078425  773124 certs.go:195] generating shared ca certs ...
	I0111 09:05:11.078442  773124 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:05:11.078680  773124 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 09:05:11.078754  773124 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 09:05:11.078770  773124 certs.go:257] generating profile certs ...
	I0111 09:05:11.078850  773124 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.key
	I0111 09:05:11.078887  773124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt with IP's: []
	I0111 09:05:11.492241  773124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt ...
	I0111 09:05:11.492274  773124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: {Name:mka516f0b80df289babb150faabf2b254425cf8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:05:11.492521  773124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.key ...
	I0111 09:05:11.492537  773124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.key: {Name:mk594aafe3abbe8492971ea45ad91aa0190c6ff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:05:11.492638  773124 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.key.689315f2
	I0111 09:05:11.492657  773124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.crt.689315f2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0111 09:05:11.796534  773124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.crt.689315f2 ...
	I0111 09:05:11.796566  773124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.crt.689315f2: {Name:mk2ab4047c3a9bb6d28ddd9a66be59ce332eac1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:05:11.796754  773124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.key.689315f2 ...
	I0111 09:05:11.796769  773124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.key.689315f2: {Name:mk968da6bd037763758f76ae11be85059ab99aae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:05:11.796854  773124 certs.go:382] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.crt.689315f2 -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.crt
	I0111 09:05:11.796931  773124 certs.go:386] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.key.689315f2 -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.key
	I0111 09:05:11.796990  773124 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.key
	I0111 09:05:11.797010  773124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.crt with IP's: []
	I0111 09:05:12.004509  773124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.crt ...
	I0111 09:05:12.004546  773124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.crt: {Name:mkf402b5561fcd741db5e8f975690ba819d73b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:05:12.004761  773124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.key ...
	I0111 09:05:12.004779  773124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.key: {Name:mk90efc913652e12b57fc05a6d9958766302accc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:05:12.004964  773124 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 09:05:12.005017  773124 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 09:05:12.005032  773124 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 09:05:12.005058  773124 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 09:05:12.005090  773124 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 09:05:12.005123  773124 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 09:05:12.005173  773124 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:05:12.005786  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 09:05:12.050890  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 09:05:12.070309  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 09:05:12.089371  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 09:05:12.108257  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0111 09:05:12.127072  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 09:05:12.145345  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 09:05:12.164553  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 09:05:12.183390  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 09:05:12.201784  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 09:05:12.220812  773124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 09:05:12.239792  773124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 09:05:12.253343  773124 ssh_runner.go:195] Run: openssl version
	I0111 09:05:12.259927  773124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:05:12.267667  773124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 09:05:12.275374  773124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:05:12.279182  773124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:05:12.279282  773124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:05:12.320402  773124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 09:05:12.328194  773124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 09:05:12.336279  773124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 09:05:12.344386  773124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 09:05:12.353493  773124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 09:05:12.357279  773124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 09:05:12.357349  773124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 09:05:12.398357  773124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 09:05:12.406070  773124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/576907.pem /etc/ssl/certs/51391683.0
	I0111 09:05:12.414249  773124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 09:05:12.421786  773124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 09:05:12.429667  773124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 09:05:12.433859  773124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 09:05:12.433927  773124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 09:05:12.475638  773124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 09:05:12.483375  773124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5769072.pem /etc/ssl/certs/3ec20f2e.0
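	The openssl/ln pairs above follow the standard OpenSSL CA-directory convention: each certificate under /etc/ssl/certs gets a symlink named after its subject hash so library lookups can find it. A minimal sketch of one such pair, matching the b5213941.0 link created for minikubeCA.pem:

	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"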
	I0111 09:05:12.491098  773124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 09:05:12.495076  773124 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 09:05:12.495133  773124 kubeadm.go:401] StartCluster: {Name:no-preload-236664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:05:12.495219  773124 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 09:05:12.495277  773124 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 09:05:12.522900  773124 cri.go:96] found id: ""
	I0111 09:05:12.523019  773124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 09:05:12.531211  773124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 09:05:12.539381  773124 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 09:05:12.539452  773124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 09:05:12.547378  773124 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 09:05:12.547400  773124 kubeadm.go:158] found existing configuration files:
	
	I0111 09:05:12.547453  773124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 09:05:12.555188  773124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 09:05:12.555290  773124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 09:05:12.562827  773124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 09:05:12.571715  773124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 09:05:12.571801  773124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 09:05:12.579542  773124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 09:05:12.587657  773124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 09:05:12.587725  773124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 09:05:12.595811  773124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 09:05:12.603759  773124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 09:05:12.603855  773124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 09:05:12.611500  773124 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 09:05:12.777663  773124 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:05:12.778094  773124 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 09:05:12.847035  773124 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 09:05:24.612497  773124 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 09:05:24.612558  773124 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 09:05:24.612646  773124 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 09:05:24.612705  773124 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 09:05:24.612745  773124 kubeadm.go:319] OS: Linux
	I0111 09:05:24.612794  773124 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 09:05:24.612856  773124 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 09:05:24.612907  773124 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 09:05:24.612957  773124 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 09:05:24.613009  773124 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 09:05:24.613062  773124 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 09:05:24.613111  773124 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 09:05:24.613162  773124 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 09:05:24.613210  773124 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 09:05:24.613286  773124 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 09:05:24.613385  773124 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 09:05:24.613478  773124 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 09:05:24.613546  773124 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 09:05:24.616488  773124 out.go:252]   - Generating certificates and keys ...
	I0111 09:05:24.616585  773124 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 09:05:24.616656  773124 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 09:05:24.616727  773124 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 09:05:24.616786  773124 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 09:05:24.616851  773124 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 09:05:24.616905  773124 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 09:05:24.616963  773124 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 09:05:24.617087  773124 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-236664] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 09:05:24.617143  773124 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 09:05:24.617265  773124 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-236664] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 09:05:24.617338  773124 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 09:05:24.617405  773124 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 09:05:24.617453  773124 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 09:05:24.617512  773124 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 09:05:24.617568  773124 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 09:05:24.617628  773124 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 09:05:24.617690  773124 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 09:05:24.617756  773124 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 09:05:24.617817  773124 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 09:05:24.617903  773124 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 09:05:24.617972  773124 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 09:05:24.621122  773124 out.go:252]   - Booting up control plane ...
	I0111 09:05:24.621254  773124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 09:05:24.621359  773124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 09:05:24.621434  773124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 09:05:24.621577  773124 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 09:05:24.621676  773124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 09:05:24.621784  773124 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 09:05:24.621881  773124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 09:05:24.621925  773124 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 09:05:24.622058  773124 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 09:05:24.622278  773124 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 09:05:24.622359  773124 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000909519s
	I0111 09:05:24.622457  773124 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0111 09:05:24.622546  773124 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0111 09:05:24.622641  773124 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0111 09:05:24.622736  773124 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0111 09:05:24.622823  773124 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.509381493s
	I0111 09:05:24.622893  773124 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.54702838s
	I0111 09:05:24.622964  773124 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501456371s
	I0111 09:05:24.623074  773124 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 09:05:24.623204  773124 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 09:05:24.623276  773124 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 09:05:24.623466  773124 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-236664 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 09:05:24.623527  773124 kubeadm.go:319] [bootstrap-token] Using token: i6qb4l.0gwcarj2t0u0gvsu
	I0111 09:05:24.626542  773124 out.go:252]   - Configuring RBAC rules ...
	I0111 09:05:24.626673  773124 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 09:05:24.626775  773124 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 09:05:24.626984  773124 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 09:05:24.627170  773124 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 09:05:24.627338  773124 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 09:05:24.627472  773124 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 09:05:24.627625  773124 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 09:05:24.627702  773124 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 09:05:24.627768  773124 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 09:05:24.627781  773124 kubeadm.go:319] 
	I0111 09:05:24.627853  773124 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 09:05:24.627877  773124 kubeadm.go:319] 
	I0111 09:05:24.627961  773124 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 09:05:24.627969  773124 kubeadm.go:319] 
	I0111 09:05:24.627995  773124 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 09:05:24.628058  773124 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 09:05:24.628134  773124 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 09:05:24.628163  773124 kubeadm.go:319] 
	I0111 09:05:24.628219  773124 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 09:05:24.628259  773124 kubeadm.go:319] 
	I0111 09:05:24.628340  773124 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 09:05:24.628348  773124 kubeadm.go:319] 
	I0111 09:05:24.628428  773124 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 09:05:24.628551  773124 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 09:05:24.628652  773124 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 09:05:24.628658  773124 kubeadm.go:319] 
	I0111 09:05:24.628788  773124 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 09:05:24.628902  773124 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 09:05:24.628923  773124 kubeadm.go:319] 
	I0111 09:05:24.629010  773124 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token i6qb4l.0gwcarj2t0u0gvsu \
	I0111 09:05:24.629117  773124 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb \
	I0111 09:05:24.629141  773124 kubeadm.go:319] 	--control-plane 
	I0111 09:05:24.629152  773124 kubeadm.go:319] 
	I0111 09:05:24.629254  773124 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 09:05:24.629296  773124 kubeadm.go:319] 
	I0111 09:05:24.629412  773124 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token i6qb4l.0gwcarj2t0u0gvsu \
	I0111 09:05:24.629569  773124 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb 
	I0111 09:05:24.629591  773124 cni.go:84] Creating CNI manager for ""
	I0111 09:05:24.629600  773124 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:05:24.632688  773124 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 09:05:24.635655  773124 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 09:05:24.640035  773124 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0111 09:05:24.640055  773124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 09:05:24.653140  773124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
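	The manifest applied here is the kindnet CNI recommended a few lines earlier for the docker-driver + crio-runtime combination. One way to confirm it rolled out from the host, assuming the DaemonSet is named kindnet and its pods carry the app=kindnet label (consistent with the kindnet-qp4zr pod seen later in this log):

	    kubectl -n kube-system rollout status daemonset/kindnet --timeout=2m
	    kubectl -n kube-system get pods -l app=kindnet -o wide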
	I0111 09:05:24.957503  773124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 09:05:24.957597  773124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:05:24.957674  773124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-236664 minikube.k8s.io/updated_at=2026_01_11T09_05_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=no-preload-236664 minikube.k8s.io/primary=true
	I0111 09:05:25.094201  773124 ops.go:34] apiserver oom_adj: -16
	I0111 09:05:25.094429  773124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:05:25.595037  773124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:05:26.095184  773124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:05:26.595525  773124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:05:27.095405  773124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:05:27.595034  773124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:05:28.095399  773124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:05:28.594967  773124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:05:29.094653  773124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:05:29.257779  773124 kubeadm.go:1114] duration metric: took 4.300241695s to wait for elevateKubeSystemPrivileges
	I0111 09:05:29.257810  773124 kubeadm.go:403] duration metric: took 16.76268333s to StartCluster
	I0111 09:05:29.257828  773124 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:05:29.257887  773124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:05:29.258559  773124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:05:29.258770  773124 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:05:29.258881  773124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 09:05:29.259124  773124 config.go:182] Loaded profile config "no-preload-236664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:05:29.259171  773124 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:05:29.259234  773124 addons.go:70] Setting storage-provisioner=true in profile "no-preload-236664"
	I0111 09:05:29.259248  773124 addons.go:239] Setting addon storage-provisioner=true in "no-preload-236664"
	I0111 09:05:29.259275  773124 host.go:66] Checking if "no-preload-236664" exists ...
	I0111 09:05:29.259989  773124 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:05:29.260137  773124 addons.go:70] Setting default-storageclass=true in profile "no-preload-236664"
	I0111 09:05:29.260158  773124 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-236664"
	I0111 09:05:29.260400  773124 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:05:29.263150  773124 out.go:179] * Verifying Kubernetes components...
	I0111 09:05:29.267055  773124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:05:29.300062  773124 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:05:29.302262  773124 addons.go:239] Setting addon default-storageclass=true in "no-preload-236664"
	I0111 09:05:29.302299  773124 host.go:66] Checking if "no-preload-236664" exists ...
	I0111 09:05:29.302734  773124 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:05:29.303031  773124 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:05:29.303044  773124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:05:29.303088  773124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:05:29.333545  773124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33793 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:05:29.339674  773124 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:05:29.339699  773124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:05:29.339776  773124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:05:29.368104  773124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33793 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:05:29.598409  773124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0111 09:05:29.630623  773124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:05:29.647985  773124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:05:29.818622  773124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:05:30.481983  773124 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
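	The sed pipeline run at 09:05:29.598 rewrites the coredns ConfigMap so that a hosts block mapping host.minikube.internal to 192.168.85.1 sits just ahead of the forward plugin. A quick way to inspect the result by hand; the expected fragment below is reconstructed from the sed expression, not captured from the cluster:

	    # Corefile should now contain, just before "forward . /etc/resolv.conf":
	    #     hosts {
	    #        192.168.85.1 host.minikube.internal
	    #        fallthrough
	    #     }
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'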
	I0111 09:05:30.801216  773124 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.17051172s)
	I0111 09:05:30.801264  773124 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.153210196s)
	I0111 09:05:30.801956  773124 node_ready.go:35] waiting up to 6m0s for node "no-preload-236664" to be "Ready" ...
	I0111 09:05:30.824116  773124 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0111 09:05:30.827114  773124 addons.go:530] duration metric: took 1.567936599s for enable addons: enabled=[storage-provisioner default-storageclass]
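	Both addons enabled here can be checked from the host once the kubeconfig points at this cluster; the storage-provisioner pod name appears later in this log, while the default StorageClass name (normally "standard" in minikube) is an assumption, not shown here:

	    kubectl -n kube-system get pod storage-provisioner
	    kubectl get storageclass   # the default-storageclass addon normally creates "standard"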
	I0111 09:05:30.986090  773124 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-236664" context rescaled to 1 replicas
	W0111 09:05:32.804820  773124 node_ready.go:57] node "no-preload-236664" has "Ready":"False" status (will retry)
	W0111 09:05:34.805605  773124 node_ready.go:57] node "no-preload-236664" has "Ready":"False" status (will retry)
	W0111 09:05:37.304713  773124 node_ready.go:57] node "no-preload-236664" has "Ready":"False" status (will retry)
	W0111 09:05:39.306306  773124 node_ready.go:57] node "no-preload-236664" has "Ready":"False" status (will retry)
	W0111 09:05:41.805423  773124 node_ready.go:57] node "no-preload-236664" has "Ready":"False" status (will retry)
	I0111 09:05:43.304861  773124 node_ready.go:49] node "no-preload-236664" is "Ready"
	I0111 09:05:43.304894  773124 node_ready.go:38] duration metric: took 12.502911523s for node "no-preload-236664" to be "Ready" ...
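	The retry loop above (about 12.5s of "Ready":"False" polls) is roughly equivalent to the following manual wait, with the timeout mirroring the 6m0s node budget set at 09:05:30.801:

	    kubectl wait --for=condition=Ready node/no-preload-236664 --timeout=6m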
	I0111 09:05:43.304908  773124 api_server.go:52] waiting for apiserver process to appear ...
	I0111 09:05:43.304969  773124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 09:05:43.317256  773124 api_server.go:72] duration metric: took 14.058449882s to wait for apiserver process to appear ...
	I0111 09:05:43.317282  773124 api_server.go:88] waiting for apiserver healthz status ...
	I0111 09:05:43.317301  773124 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 09:05:43.327643  773124 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0111 09:05:43.328963  773124 api_server.go:141] control plane version: v1.35.0
	I0111 09:05:43.328994  773124 api_server.go:131] duration metric: took 11.705201ms to wait for apiserver health ...
	I0111 09:05:43.329004  773124 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 09:05:43.332860  773124 system_pods.go:59] 8 kube-system pods found
	I0111 09:05:43.332929  773124 system_pods.go:61] "coredns-7d764666f9-klbbk" [80992683-bfe3-4e82-9b11-b7fbb5d78563] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:05:43.332939  773124 system_pods.go:61] "etcd-no-preload-236664" [0f619fb0-29f6-48d4-aecb-6037e3eefea7] Running
	I0111 09:05:43.332946  773124 system_pods.go:61] "kindnet-qp4zr" [93ff9ed5-c418-43c6-9661-20274d61d8a0] Running
	I0111 09:05:43.332951  773124 system_pods.go:61] "kube-apiserver-no-preload-236664" [e14eb11c-fffc-4ceb-b273-64041b01342a] Running
	I0111 09:05:43.332956  773124 system_pods.go:61] "kube-controller-manager-no-preload-236664" [429a4174-5009-493d-b016-6cb0e5c4779c] Running
	I0111 09:05:43.332967  773124 system_pods.go:61] "kube-proxy-fzn6d" [ebbd59c7-c087-48ed-9d3a-aab1a6c47aab] Running
	I0111 09:05:43.332972  773124 system_pods.go:61] "kube-scheduler-no-preload-236664" [4e3b1490-bf36-4093-a691-c7b17ddd3761] Running
	I0111 09:05:43.332980  773124 system_pods.go:61] "storage-provisioner" [882fc5e2-1706-42f4-90e2-9b77dfefb288] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 09:05:43.332993  773124 system_pods.go:74] duration metric: took 3.981024ms to wait for pod list to return data ...
	I0111 09:05:43.333002  773124 default_sa.go:34] waiting for default service account to be created ...
	I0111 09:05:43.347851  773124 default_sa.go:45] found service account: "default"
	I0111 09:05:43.347882  773124 default_sa.go:55] duration metric: took 14.873307ms for default service account to be created ...
	I0111 09:05:43.347894  773124 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 09:05:43.355975  773124 system_pods.go:86] 8 kube-system pods found
	I0111 09:05:43.356013  773124 system_pods.go:89] "coredns-7d764666f9-klbbk" [80992683-bfe3-4e82-9b11-b7fbb5d78563] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:05:43.356019  773124 system_pods.go:89] "etcd-no-preload-236664" [0f619fb0-29f6-48d4-aecb-6037e3eefea7] Running
	I0111 09:05:43.356026  773124 system_pods.go:89] "kindnet-qp4zr" [93ff9ed5-c418-43c6-9661-20274d61d8a0] Running
	I0111 09:05:43.356031  773124 system_pods.go:89] "kube-apiserver-no-preload-236664" [e14eb11c-fffc-4ceb-b273-64041b01342a] Running
	I0111 09:05:43.356037  773124 system_pods.go:89] "kube-controller-manager-no-preload-236664" [429a4174-5009-493d-b016-6cb0e5c4779c] Running
	I0111 09:05:43.356042  773124 system_pods.go:89] "kube-proxy-fzn6d" [ebbd59c7-c087-48ed-9d3a-aab1a6c47aab] Running
	I0111 09:05:43.356047  773124 system_pods.go:89] "kube-scheduler-no-preload-236664" [4e3b1490-bf36-4093-a691-c7b17ddd3761] Running
	I0111 09:05:43.356054  773124 system_pods.go:89] "storage-provisioner" [882fc5e2-1706-42f4-90e2-9b77dfefb288] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 09:05:43.356085  773124 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0111 09:05:43.582117  773124 system_pods.go:86] 8 kube-system pods found
	I0111 09:05:43.582240  773124 system_pods.go:89] "coredns-7d764666f9-klbbk" [80992683-bfe3-4e82-9b11-b7fbb5d78563] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:05:43.582252  773124 system_pods.go:89] "etcd-no-preload-236664" [0f619fb0-29f6-48d4-aecb-6037e3eefea7] Running
	I0111 09:05:43.582260  773124 system_pods.go:89] "kindnet-qp4zr" [93ff9ed5-c418-43c6-9661-20274d61d8a0] Running
	I0111 09:05:43.582266  773124 system_pods.go:89] "kube-apiserver-no-preload-236664" [e14eb11c-fffc-4ceb-b273-64041b01342a] Running
	I0111 09:05:43.582271  773124 system_pods.go:89] "kube-controller-manager-no-preload-236664" [429a4174-5009-493d-b016-6cb0e5c4779c] Running
	I0111 09:05:43.582284  773124 system_pods.go:89] "kube-proxy-fzn6d" [ebbd59c7-c087-48ed-9d3a-aab1a6c47aab] Running
	I0111 09:05:43.582292  773124 system_pods.go:89] "kube-scheduler-no-preload-236664" [4e3b1490-bf36-4093-a691-c7b17ddd3761] Running
	I0111 09:05:43.582298  773124 system_pods.go:89] "storage-provisioner" [882fc5e2-1706-42f4-90e2-9b77dfefb288] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 09:05:43.909026  773124 system_pods.go:86] 8 kube-system pods found
	I0111 09:05:43.909065  773124 system_pods.go:89] "coredns-7d764666f9-klbbk" [80992683-bfe3-4e82-9b11-b7fbb5d78563] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:05:43.909073  773124 system_pods.go:89] "etcd-no-preload-236664" [0f619fb0-29f6-48d4-aecb-6037e3eefea7] Running
	I0111 09:05:43.909101  773124 system_pods.go:89] "kindnet-qp4zr" [93ff9ed5-c418-43c6-9661-20274d61d8a0] Running
	I0111 09:05:43.909114  773124 system_pods.go:89] "kube-apiserver-no-preload-236664" [e14eb11c-fffc-4ceb-b273-64041b01342a] Running
	I0111 09:05:43.909120  773124 system_pods.go:89] "kube-controller-manager-no-preload-236664" [429a4174-5009-493d-b016-6cb0e5c4779c] Running
	I0111 09:05:43.909125  773124 system_pods.go:89] "kube-proxy-fzn6d" [ebbd59c7-c087-48ed-9d3a-aab1a6c47aab] Running
	I0111 09:05:43.909130  773124 system_pods.go:89] "kube-scheduler-no-preload-236664" [4e3b1490-bf36-4093-a691-c7b17ddd3761] Running
	I0111 09:05:43.909136  773124 system_pods.go:89] "storage-provisioner" [882fc5e2-1706-42f4-90e2-9b77dfefb288] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 09:05:44.330715  773124 system_pods.go:86] 8 kube-system pods found
	I0111 09:05:44.330748  773124 system_pods.go:89] "coredns-7d764666f9-klbbk" [80992683-bfe3-4e82-9b11-b7fbb5d78563] Running
	I0111 09:05:44.330756  773124 system_pods.go:89] "etcd-no-preload-236664" [0f619fb0-29f6-48d4-aecb-6037e3eefea7] Running
	I0111 09:05:44.330761  773124 system_pods.go:89] "kindnet-qp4zr" [93ff9ed5-c418-43c6-9661-20274d61d8a0] Running
	I0111 09:05:44.330766  773124 system_pods.go:89] "kube-apiserver-no-preload-236664" [e14eb11c-fffc-4ceb-b273-64041b01342a] Running
	I0111 09:05:44.330775  773124 system_pods.go:89] "kube-controller-manager-no-preload-236664" [429a4174-5009-493d-b016-6cb0e5c4779c] Running
	I0111 09:05:44.330781  773124 system_pods.go:89] "kube-proxy-fzn6d" [ebbd59c7-c087-48ed-9d3a-aab1a6c47aab] Running
	I0111 09:05:44.330785  773124 system_pods.go:89] "kube-scheduler-no-preload-236664" [4e3b1490-bf36-4093-a691-c7b17ddd3761] Running
	I0111 09:05:44.330790  773124 system_pods.go:89] "storage-provisioner" [882fc5e2-1706-42f4-90e2-9b77dfefb288] Running
	I0111 09:05:44.330797  773124 system_pods.go:126] duration metric: took 982.898481ms to wait for k8s-apps to be running ...
	I0111 09:05:44.330810  773124 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 09:05:44.330869  773124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:05:44.344761  773124 system_svc.go:56] duration metric: took 13.940242ms WaitForService to wait for kubelet
	I0111 09:05:44.344794  773124 kubeadm.go:587] duration metric: took 15.085992648s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:05:44.344815  773124 node_conditions.go:102] verifying NodePressure condition ...
	I0111 09:05:44.347592  773124 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 09:05:44.347630  773124 node_conditions.go:123] node cpu capacity is 2
	I0111 09:05:44.347644  773124 node_conditions.go:105] duration metric: took 2.824242ms to run NodePressure ...
	I0111 09:05:44.347659  773124 start.go:242] waiting for startup goroutines ...
	I0111 09:05:44.347667  773124 start.go:247] waiting for cluster config update ...
	I0111 09:05:44.347679  773124 start.go:256] writing updated cluster config ...
	I0111 09:05:44.347972  773124 ssh_runner.go:195] Run: rm -f paused
	I0111 09:05:44.353004  773124 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:05:44.356451  773124 pod_ready.go:83] waiting for pod "coredns-7d764666f9-klbbk" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:05:44.361587  773124 pod_ready.go:94] pod "coredns-7d764666f9-klbbk" is "Ready"
	I0111 09:05:44.361615  773124 pod_ready.go:86] duration metric: took 5.13253ms for pod "coredns-7d764666f9-klbbk" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:05:44.364348  773124 pod_ready.go:83] waiting for pod "etcd-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:05:44.369302  773124 pod_ready.go:94] pod "etcd-no-preload-236664" is "Ready"
	I0111 09:05:44.369331  773124 pod_ready.go:86] duration metric: took 4.913433ms for pod "etcd-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:05:44.372279  773124 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:05:44.377588  773124 pod_ready.go:94] pod "kube-apiserver-no-preload-236664" is "Ready"
	I0111 09:05:44.377617  773124 pod_ready.go:86] duration metric: took 5.311125ms for pod "kube-apiserver-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:05:44.380133  773124 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:05:44.757114  773124 pod_ready.go:94] pod "kube-controller-manager-no-preload-236664" is "Ready"
	I0111 09:05:44.757142  773124 pod_ready.go:86] duration metric: took 376.972904ms for pod "kube-controller-manager-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:05:44.957311  773124 pod_ready.go:83] waiting for pod "kube-proxy-fzn6d" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:05:45.356635  773124 pod_ready.go:94] pod "kube-proxy-fzn6d" is "Ready"
	I0111 09:05:45.356674  773124 pod_ready.go:86] duration metric: took 399.327047ms for pod "kube-proxy-fzn6d" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:05:45.557102  773124 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:05:45.957136  773124 pod_ready.go:94] pod "kube-scheduler-no-preload-236664" is "Ready"
	I0111 09:05:45.957167  773124 pod_ready.go:86] duration metric: took 400.035651ms for pod "kube-scheduler-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:05:45.957181  773124 pod_ready.go:40] duration metric: took 1.604141367s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:05:46.010520  773124 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 09:05:46.013522  773124 out.go:203] 
	W0111 09:05:46.016487  773124 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 09:05:46.019570  773124 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:05:46.023534  773124 out.go:179] * Done! kubectl is now configured to use "no-preload-236664" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 09:05:43 no-preload-236664 crio[836]: time="2026-01-11T09:05:43.450737081Z" level=info msg="Created container d99b9d3e29b11e56a0137862457aeb4dbe6ed207c478362cda0e3a0bcb177779: kube-system/coredns-7d764666f9-klbbk/coredns" id=91f65e29-be1e-4847-8c98-c5befc3e7d8a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:05:43 no-preload-236664 crio[836]: time="2026-01-11T09:05:43.452233363Z" level=info msg="Starting container: d99b9d3e29b11e56a0137862457aeb4dbe6ed207c478362cda0e3a0bcb177779" id=d9afc648-6456-4b48-a473-bf69751c6bf7 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:05:43 no-preload-236664 crio[836]: time="2026-01-11T09:05:43.457840836Z" level=info msg="Started container" PID=2428 containerID=d99b9d3e29b11e56a0137862457aeb4dbe6ed207c478362cda0e3a0bcb177779 description=kube-system/coredns-7d764666f9-klbbk/coredns id=d9afc648-6456-4b48-a473-bf69751c6bf7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fd6c1a439d505ccad132b497f368b454f0643cf5903000d81578e89af357e1a
	Jan 11 09:05:46 no-preload-236664 crio[836]: time="2026-01-11T09:05:46.538688572Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bac417d0-7827-4666-94f3-492a80660c92 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:05:46 no-preload-236664 crio[836]: time="2026-01-11T09:05:46.538775712Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:05:46 no-preload-236664 crio[836]: time="2026-01-11T09:05:46.544541595Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:48b5e6c5561f1d8cab8210fa0ca9109f121858c59d0c19f27224447f2b963b41 UID:544a96f5-758e-43eb-b70f-1c53d81f1687 NetNS:/var/run/netns/d75493fb-39ba-4ef5-b459-b372627c24fe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000117b18}] Aliases:map[]}"
	Jan 11 09:05:46 no-preload-236664 crio[836]: time="2026-01-11T09:05:46.544737955Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 11 09:05:46 no-preload-236664 crio[836]: time="2026-01-11T09:05:46.559003328Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:48b5e6c5561f1d8cab8210fa0ca9109f121858c59d0c19f27224447f2b963b41 UID:544a96f5-758e-43eb-b70f-1c53d81f1687 NetNS:/var/run/netns/d75493fb-39ba-4ef5-b459-b372627c24fe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000117b18}] Aliases:map[]}"
	Jan 11 09:05:46 no-preload-236664 crio[836]: time="2026-01-11T09:05:46.559160935Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 11 09:05:46 no-preload-236664 crio[836]: time="2026-01-11T09:05:46.561710916Z" level=info msg="Ran pod sandbox 48b5e6c5561f1d8cab8210fa0ca9109f121858c59d0c19f27224447f2b963b41 with infra container: default/busybox/POD" id=bac417d0-7827-4666-94f3-492a80660c92 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:05:46 no-preload-236664 crio[836]: time="2026-01-11T09:05:46.56560032Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d2085cd6-87ba-40d4-a468-77adfa29d179 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:05:46 no-preload-236664 crio[836]: time="2026-01-11T09:05:46.565739908Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d2085cd6-87ba-40d4-a468-77adfa29d179 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:05:46 no-preload-236664 crio[836]: time="2026-01-11T09:05:46.565820614Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d2085cd6-87ba-40d4-a468-77adfa29d179 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:05:46 no-preload-236664 crio[836]: time="2026-01-11T09:05:46.568376643Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4aeb3832-3715-4bb1-98fa-ae1646c5b806 name=/runtime.v1.ImageService/PullImage
	Jan 11 09:05:46 no-preload-236664 crio[836]: time="2026-01-11T09:05:46.56880069Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 11 09:05:48 no-preload-236664 crio[836]: time="2026-01-11T09:05:48.634120074Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=4aeb3832-3715-4bb1-98fa-ae1646c5b806 name=/runtime.v1.ImageService/PullImage
	Jan 11 09:05:48 no-preload-236664 crio[836]: time="2026-01-11T09:05:48.635011253Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3b80ca9b-edc1-41e8-87bb-a4719b1647aa name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:05:48 no-preload-236664 crio[836]: time="2026-01-11T09:05:48.63688966Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd3d8093-d784-4e70-8032-798a5a9c6076 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:05:48 no-preload-236664 crio[836]: time="2026-01-11T09:05:48.642411963Z" level=info msg="Creating container: default/busybox/busybox" id=1570ddaa-1da2-431f-9c95-0fd5047e4089 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:05:48 no-preload-236664 crio[836]: time="2026-01-11T09:05:48.642540564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:05:48 no-preload-236664 crio[836]: time="2026-01-11T09:05:48.647690514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:05:48 no-preload-236664 crio[836]: time="2026-01-11T09:05:48.64816462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:05:48 no-preload-236664 crio[836]: time="2026-01-11T09:05:48.668984319Z" level=info msg="Created container 27dd7238dad6275043f71a39295e71f1b3eea4588042faf88c713bf86a105af4: default/busybox/busybox" id=1570ddaa-1da2-431f-9c95-0fd5047e4089 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:05:48 no-preload-236664 crio[836]: time="2026-01-11T09:05:48.671118401Z" level=info msg="Starting container: 27dd7238dad6275043f71a39295e71f1b3eea4588042faf88c713bf86a105af4" id=528b66c7-bbdc-4614-8c49-a06611336937 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:05:48 no-preload-236664 crio[836]: time="2026-01-11T09:05:48.67344534Z" level=info msg="Started container" PID=2486 containerID=27dd7238dad6275043f71a39295e71f1b3eea4588042faf88c713bf86a105af4 description=default/busybox/busybox id=528b66c7-bbdc-4614-8c49-a06611336937 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48b5e6c5561f1d8cab8210fa0ca9109f121858c59d0c19f27224447f2b963b41
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	27dd7238dad62       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   48b5e6c5561f1       busybox                                     default
	d99b9d3e29b11       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      13 seconds ago      Running             coredns                   0                   5fd6c1a439d50       coredns-7d764666f9-klbbk                    kube-system
	7ee594b5dbc52       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   c5d5b692b738b       storage-provisioner                         kube-system
	20a4fd5bbf382       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   ced9f34b0a873       kindnet-qp4zr                               kube-system
	b5288195c1312       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      27 seconds ago      Running             kube-proxy                0                   ed556524cdb34       kube-proxy-fzn6d                            kube-system
	f0544df5a90f6       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      38 seconds ago      Running             kube-controller-manager   0                   71e56c0a5c3ab       kube-controller-manager-no-preload-236664   kube-system
	5dc77c3c064ca       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      38 seconds ago      Running             kube-scheduler            0                   88eed3312c498       kube-scheduler-no-preload-236664            kube-system
	55ad88cb16fcc       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      38 seconds ago      Running             kube-apiserver            0                   dfdb16603aedd       kube-apiserver-no-preload-236664            kube-system
	5db6c444a5778       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      38 seconds ago      Running             etcd                      0                   55feae9a7a308       etcd-no-preload-236664                      kube-system
	
	
	==> coredns [d99b9d3e29b11e56a0137862457aeb4dbe6ed207c478362cda0e3a0bcb177779] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36632 - 44399 "HINFO IN 5804625364259447859.1800287467261579082. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011074342s
	
	
	==> describe nodes <==
	Name:               no-preload-236664
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-236664
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=no-preload-236664
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_05_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:05:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-236664
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:05:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:05:54 +0000   Sun, 11 Jan 2026 09:05:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:05:54 +0000   Sun, 11 Jan 2026 09:05:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:05:54 +0000   Sun, 11 Jan 2026 09:05:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 09:05:54 +0000   Sun, 11 Jan 2026 09:05:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-236664
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                89f99f7b-845b-4e1b-9e20-91037b4226fe
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-klbbk                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-no-preload-236664                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-qp4zr                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-236664             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-236664    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-fzn6d                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-236664             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node no-preload-236664 event: Registered Node no-preload-236664 in Controller
	
	
	==> dmesg <==
	[Jan11 08:31] overlayfs: idmapped layers are currently not supported
	[Jan11 08:32] overlayfs: idmapped layers are currently not supported
	[Jan11 08:35] overlayfs: idmapped layers are currently not supported
	[Jan11 08:36] overlayfs: idmapped layers are currently not supported
	[Jan11 08:37] overlayfs: idmapped layers are currently not supported
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5db6c444a5778030357de56130c634090c955886aa5c4c62314b5546a65f5d9b] <==
	{"level":"info","ts":"2026-01-11T09:05:18.312269Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T09:05:19.256088Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-11T09:05:19.256237Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-11T09:05:19.256332Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2026-01-11T09:05:19.256402Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:05:19.256443Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:05:19.257554Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:05:19.257635Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:05:19.257679Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2026-01-11T09:05:19.257714Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:05:19.258890Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-236664 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:05:19.258965Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:05:19.259137Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:05:19.259424Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:05:19.259658Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:05:19.259702Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T09:05:19.260786Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:05:19.262943Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T09:05:19.267074Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:05:19.269473Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-11T09:05:19.290827Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:05:19.334399Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:05:19.334539Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:05:19.334626Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T09:05:19.334716Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	
	
	==> kernel <==
	 09:05:56 up  3:48,  0 user,  load average: 1.00, 1.34, 1.80
	Linux no-preload-236664 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [20a4fd5bbf382fe496b1a95598e3b8efdee600cf27a804c802f4cfbef7c57d09] <==
	I0111 09:05:32.245688       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:05:32.246014       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 09:05:32.246173       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:05:32.246234       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:05:32.246257       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:05:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:05:32.448370       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:05:32.538207       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:05:32.538317       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:05:32.538506       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 09:05:32.738416       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 09:05:32.738507       1 metrics.go:72] Registering metrics
	I0111 09:05:32.738612       1 controller.go:711] "Syncing nftables rules"
	I0111 09:05:42.448914       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:05:42.448972       1 main.go:301] handling current node
	I0111 09:05:52.450297       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:05:52.450400       1 main.go:301] handling current node
	
	
	==> kube-apiserver [55ad88cb16fcc5f3e36eb4067baaf8beed6fe4b7b8383e1bdd3bf0fd831cace7] <==
	I0111 09:05:21.297720       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E0111 09:05:21.303761       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0111 09:05:21.312979       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0111 09:05:21.313029       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:05:21.317453       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:05:21.338684       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 09:05:21.474480       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:05:21.935762       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0111 09:05:21.943418       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0111 09:05:21.943548       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 09:05:22.649638       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:05:22.698203       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:05:22.832901       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0111 09:05:22.842062       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0111 09:05:22.843478       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 09:05:22.848928       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:05:23.082381       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 09:05:24.047205       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 09:05:24.065343       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0111 09:05:24.077328       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0111 09:05:28.635681       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0111 09:05:28.839641       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:05:28.844505       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:05:28.989439       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E0111 09:05:55.349393       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:35024: use of closed network connection
	
	
	==> kube-controller-manager [f0544df5a90f6807f10295506347f8d524b49a5e73d7364f761700f150b43e27] <==
	I0111 09:05:27.907684       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0111 09:05:27.907689       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:05:27.907693       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.911795       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.911836       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.911986       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.912146       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.912296       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.913636       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:05:27.913927       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.917343       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.917393       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.917425       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.917619       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.918063       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.923885       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.924208       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.924455       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.931762       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.957460       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-236664" podCIDRs=["10.244.0.0/24"]
	I0111 09:05:27.993998       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:27.994116       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 09:05:27.994160       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 09:05:28.014292       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:47.894965       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [b5288195c131226e2ea5b35c09d31619af0fb82807489d6b8e92e674bfb71ed5] <==
	I0111 09:05:29.154076       1 server_linux.go:53] "Using iptables proxy"
	I0111 09:05:29.415254       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:05:29.516067       1 shared_informer.go:377] "Caches are synced"
	I0111 09:05:29.516101       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0111 09:05:29.516173       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 09:05:29.572337       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:05:29.576539       1 server_linux.go:136] "Using iptables Proxier"
	I0111 09:05:29.582097       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 09:05:29.582472       1 server.go:529] "Version info" version="v1.35.0"
	I0111 09:05:29.582546       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:05:29.601740       1 config.go:200] "Starting service config controller"
	I0111 09:05:29.601757       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 09:05:29.601776       1 config.go:106] "Starting endpoint slice config controller"
	I0111 09:05:29.601781       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 09:05:29.601792       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 09:05:29.601797       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 09:05:29.602672       1 config.go:309] "Starting node config controller"
	I0111 09:05:29.602683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 09:05:29.602690       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 09:05:29.702238       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 09:05:29.702258       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 09:05:29.702235       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5dc77c3c064ca21f51959cbc8855bfc14f5f5553a503724ae0a8783322bec5d7] <==
	E0111 09:05:21.229262       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0111 09:05:21.243168       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 09:05:21.245030       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 09:05:21.245167       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 09:05:21.245210       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 09:05:21.245266       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 09:05:21.245296       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 09:05:21.245343       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 09:05:21.245376       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 09:05:21.245406       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 09:05:21.245550       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 09:05:21.245599       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 09:05:21.245636       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 09:05:21.245799       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 09:05:21.245845       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 09:05:21.245884       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 09:05:21.245929       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 09:05:21.245963       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0111 09:05:21.245999       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0111 09:05:22.131722       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 09:05:22.159872       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 09:05:22.290933       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 09:05:22.311411       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 09:05:22.344763       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	I0111 09:05:22.824076       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 09:05:28 no-preload-236664 kubelet[1935]: I0111 09:05:28.750200    1935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93ff9ed5-c418-43c6-9661-20274d61d8a0-lib-modules\") pod \"kindnet-qp4zr\" (UID: \"93ff9ed5-c418-43c6-9661-20274d61d8a0\") " pod="kube-system/kindnet-qp4zr"
	Jan 11 09:05:28 no-preload-236664 kubelet[1935]: I0111 09:05:28.750221    1935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sc5h\" (UniqueName: \"kubernetes.io/projected/93ff9ed5-c418-43c6-9661-20274d61d8a0-kube-api-access-6sc5h\") pod \"kindnet-qp4zr\" (UID: \"93ff9ed5-c418-43c6-9661-20274d61d8a0\") " pod="kube-system/kindnet-qp4zr"
	Jan 11 09:05:28 no-preload-236664 kubelet[1935]: I0111 09:05:28.860220    1935 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 11 09:05:29 no-preload-236664 kubelet[1935]: W0111 09:05:29.019301    1935 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/crio-ced9f34b0a873529e2bb7029d2cd8aeaa15871249df8b0ee132024e4a1c4959f WatchSource:0}: Error finding container ced9f34b0a873529e2bb7029d2cd8aeaa15871249df8b0ee132024e4a1c4959f: Status 404 returned error can't find the container with id ced9f34b0a873529e2bb7029d2cd8aeaa15871249df8b0ee132024e4a1c4959f
	Jan 11 09:05:29 no-preload-236664 kubelet[1935]: E0111 09:05:29.298631    1935 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-236664" containerName="kube-controller-manager"
	Jan 11 09:05:29 no-preload-236664 kubelet[1935]: I0111 09:05:29.402825    1935 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-fzn6d" podStartSLOduration=1.402810191 podStartE2EDuration="1.402810191s" podCreationTimestamp="2026-01-11 09:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:05:29.135751795 +0000 UTC m=+5.274459095" watchObservedRunningTime="2026-01-11 09:05:29.402810191 +0000 UTC m=+5.541517507"
	Jan 11 09:05:30 no-preload-236664 kubelet[1935]: E0111 09:05:30.355786    1935 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-236664" containerName="kube-scheduler"
	Jan 11 09:05:31 no-preload-236664 kubelet[1935]: E0111 09:05:31.137369    1935 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-236664" containerName="kube-apiserver"
	Jan 11 09:05:31 no-preload-236664 kubelet[1935]: E0111 09:05:31.930317    1935 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-236664" containerName="etcd"
	Jan 11 09:05:39 no-preload-236664 kubelet[1935]: E0111 09:05:39.307925    1935 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-236664" containerName="kube-controller-manager"
	Jan 11 09:05:39 no-preload-236664 kubelet[1935]: I0111 09:05:39.321997    1935 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-qp4zr" podStartSLOduration=8.216297095 podStartE2EDuration="11.321982653s" podCreationTimestamp="2026-01-11 09:05:28 +0000 UTC" firstStartedPulling="2026-01-11 09:05:29.023445895 +0000 UTC m=+5.162153194" lastFinishedPulling="2026-01-11 09:05:32.129131452 +0000 UTC m=+8.267838752" observedRunningTime="2026-01-11 09:05:33.170043756 +0000 UTC m=+9.308751064" watchObservedRunningTime="2026-01-11 09:05:39.321982653 +0000 UTC m=+15.460689953"
	Jan 11 09:05:40 no-preload-236664 kubelet[1935]: E0111 09:05:40.364166    1935 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-236664" containerName="kube-scheduler"
	Jan 11 09:05:41 no-preload-236664 kubelet[1935]: E0111 09:05:41.147845    1935 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-236664" containerName="kube-apiserver"
	Jan 11 09:05:41 no-preload-236664 kubelet[1935]: E0111 09:05:41.931290    1935 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-236664" containerName="etcd"
	Jan 11 09:05:42 no-preload-236664 kubelet[1935]: I0111 09:05:42.977641    1935 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 11 09:05:43 no-preload-236664 kubelet[1935]: I0111 09:05:43.061104    1935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/882fc5e2-1706-42f4-90e2-9b77dfefb288-tmp\") pod \"storage-provisioner\" (UID: \"882fc5e2-1706-42f4-90e2-9b77dfefb288\") " pod="kube-system/storage-provisioner"
	Jan 11 09:05:43 no-preload-236664 kubelet[1935]: I0111 09:05:43.061156    1935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dl7p\" (UniqueName: \"kubernetes.io/projected/882fc5e2-1706-42f4-90e2-9b77dfefb288-kube-api-access-5dl7p\") pod \"storage-provisioner\" (UID: \"882fc5e2-1706-42f4-90e2-9b77dfefb288\") " pod="kube-system/storage-provisioner"
	Jan 11 09:05:43 no-preload-236664 kubelet[1935]: I0111 09:05:43.061184    1935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80992683-bfe3-4e82-9b11-b7fbb5d78563-config-volume\") pod \"coredns-7d764666f9-klbbk\" (UID: \"80992683-bfe3-4e82-9b11-b7fbb5d78563\") " pod="kube-system/coredns-7d764666f9-klbbk"
	Jan 11 09:05:43 no-preload-236664 kubelet[1935]: I0111 09:05:43.061223    1935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhb2s\" (UniqueName: \"kubernetes.io/projected/80992683-bfe3-4e82-9b11-b7fbb5d78563-kube-api-access-qhb2s\") pod \"coredns-7d764666f9-klbbk\" (UID: \"80992683-bfe3-4e82-9b11-b7fbb5d78563\") " pod="kube-system/coredns-7d764666f9-klbbk"
	Jan 11 09:05:44 no-preload-236664 kubelet[1935]: E0111 09:05:44.185596    1935 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-klbbk" containerName="coredns"
	Jan 11 09:05:44 no-preload-236664 kubelet[1935]: I0111 09:05:44.223274    1935 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.22325743 podStartE2EDuration="14.22325743s" podCreationTimestamp="2026-01-11 09:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:05:44.202294871 +0000 UTC m=+20.341002179" watchObservedRunningTime="2026-01-11 09:05:44.22325743 +0000 UTC m=+20.361964739"
	Jan 11 09:05:45 no-preload-236664 kubelet[1935]: E0111 09:05:45.188355    1935 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-klbbk" containerName="coredns"
	Jan 11 09:05:46 no-preload-236664 kubelet[1935]: E0111 09:05:46.189922    1935 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-klbbk" containerName="coredns"
	Jan 11 09:05:46 no-preload-236664 kubelet[1935]: I0111 09:05:46.228133    1935 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-klbbk" podStartSLOduration=17.228107039 podStartE2EDuration="17.228107039s" podCreationTimestamp="2026-01-11 09:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:05:44.226373467 +0000 UTC m=+20.365080775" watchObservedRunningTime="2026-01-11 09:05:46.228107039 +0000 UTC m=+22.366814339"
	Jan 11 09:05:46 no-preload-236664 kubelet[1935]: I0111 09:05:46.292536    1935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbkdx\" (UniqueName: \"kubernetes.io/projected/544a96f5-758e-43eb-b70f-1c53d81f1687-kube-api-access-zbkdx\") pod \"busybox\" (UID: \"544a96f5-758e-43eb-b70f-1c53d81f1687\") " pod="default/busybox"
	
	
	==> storage-provisioner [7ee594b5dbc523fc4d0062e8a2c234b2bf4eb25da23a344220265b8bfc2f229c] <==
	I0111 09:05:43.404029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 09:05:43.428411       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 09:05:43.428473       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 09:05:43.433424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:43.442649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:05:43.442920       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 09:05:43.445629       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa57bb8e-53f1-4eea-8701-651adbacd6ef", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-236664_c44e4d92-fb37-425d-b087-b0b06527df57 became leader
	I0111 09:05:43.449736       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-236664_c44e4d92-fb37-425d-b087-b0b06527df57!
	W0111 09:05:43.531523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:43.535040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:05:43.551229       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-236664_c44e4d92-fb37-425d-b087-b0b06527df57!
	W0111 09:05:45.538804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:45.543499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:47.546594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:47.551005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:49.554100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:49.558499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:51.561753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:51.566314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:53.570348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:53.574662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:55.578430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:05:55.585884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-236664 -n no-preload-236664
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-236664 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-236664 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-236664 --alsologtostderr -v=1: exit status 80 (1.818336114s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-236664 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 09:07:11.105160  780321 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:07:11.105327  780321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:07:11.105348  780321 out.go:374] Setting ErrFile to fd 2...
	I0111 09:07:11.105375  780321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:07:11.105704  780321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:07:11.106007  780321 out.go:368] Setting JSON to false
	I0111 09:07:11.106028  780321 mustload.go:66] Loading cluster: no-preload-236664
	I0111 09:07:11.106504  780321 config.go:182] Loaded profile config "no-preload-236664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:07:11.107080  780321 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:07:11.128337  780321 host.go:66] Checking if "no-preload-236664" exists ...
	I0111 09:07:11.128686  780321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:07:11.187361  780321 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-11 09:07:11.177164326 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:07:11.188032  780321 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-236664 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0111 09:07:11.191799  780321 out.go:179] * Pausing node no-preload-236664 ... 
	I0111 09:07:11.199652  780321 host.go:66] Checking if "no-preload-236664" exists ...
	I0111 09:07:11.200020  780321 ssh_runner.go:195] Run: systemctl --version
	I0111 09:07:11.200060  780321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:07:11.218306  780321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:07:11.324894  780321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:07:11.348685  780321 pause.go:52] kubelet running: true
	I0111 09:07:11.348756  780321 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:07:11.617425  780321 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:07:11.617524  780321 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:07:11.697663  780321 cri.go:96] found id: "b066119e65c645014df48492eae023f983096f10e5eea8c1372800164bafb2e9"
	I0111 09:07:11.697685  780321 cri.go:96] found id: "6a2d81e48ccb6d3fbc670096e077e9460cb9fdaebb6524dc50b18ca4f7bdc024"
	I0111 09:07:11.697692  780321 cri.go:96] found id: "3ed4c1f24cb00260799431425b62ddf25a672a12028fcd8996c2247b447e0b01"
	I0111 09:07:11.697696  780321 cri.go:96] found id: "d42e646528fe412e4b2f31ce0b419736e4a9a98cedde1b525ef43c4b84bdd437"
	I0111 09:07:11.697699  780321 cri.go:96] found id: "34a556d5cd8cc4b1cc7da4d590e25b5f9036f3794393d4a77c3fd96b8e767c7d"
	I0111 09:07:11.697702  780321 cri.go:96] found id: "330e32f7eadb9313968c7bb510089b7831588db3d8cf94a3fabbcbd17728ceb4"
	I0111 09:07:11.697706  780321 cri.go:96] found id: "7df07d00052022e60d6b9a41c00fa011c068566dbbd08a0a3c864f5b97024f9b"
	I0111 09:07:11.697709  780321 cri.go:96] found id: "db3b7cd2ab7a3576a39c22e1ecfa88bcca60f27168a7647d118e735330714d86"
	I0111 09:07:11.697712  780321 cri.go:96] found id: "2e5ccb5388ffb7117083cc27353adb4a2c137a7141f3cd18699f0c1f048c7e6a"
	I0111 09:07:11.697718  780321 cri.go:96] found id: "84bf236250d57bfed04de7336a9941a59f5c8caf324655276e90564d8c0ffbf9"
	I0111 09:07:11.697722  780321 cri.go:96] found id: "5f9e5b6974decd32e8f4aa12c584d870ee987483bb8f4fc519b1b323595fa69b"
	I0111 09:07:11.697725  780321 cri.go:96] found id: ""
	I0111 09:07:11.697775  780321 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:07:11.709882  780321 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:07:11Z" level=error msg="open /run/runc: no such file or directory"
	I0111 09:07:12.055483  780321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:07:12.069133  780321 pause.go:52] kubelet running: false
	I0111 09:07:12.069203  780321 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:07:12.235045  780321 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:07:12.235132  780321 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:07:12.303859  780321 cri.go:96] found id: "b066119e65c645014df48492eae023f983096f10e5eea8c1372800164bafb2e9"
	I0111 09:07:12.303884  780321 cri.go:96] found id: "6a2d81e48ccb6d3fbc670096e077e9460cb9fdaebb6524dc50b18ca4f7bdc024"
	I0111 09:07:12.303899  780321 cri.go:96] found id: "3ed4c1f24cb00260799431425b62ddf25a672a12028fcd8996c2247b447e0b01"
	I0111 09:07:12.303904  780321 cri.go:96] found id: "d42e646528fe412e4b2f31ce0b419736e4a9a98cedde1b525ef43c4b84bdd437"
	I0111 09:07:12.303908  780321 cri.go:96] found id: "34a556d5cd8cc4b1cc7da4d590e25b5f9036f3794393d4a77c3fd96b8e767c7d"
	I0111 09:07:12.303912  780321 cri.go:96] found id: "330e32f7eadb9313968c7bb510089b7831588db3d8cf94a3fabbcbd17728ceb4"
	I0111 09:07:12.303919  780321 cri.go:96] found id: "7df07d00052022e60d6b9a41c00fa011c068566dbbd08a0a3c864f5b97024f9b"
	I0111 09:07:12.303923  780321 cri.go:96] found id: "db3b7cd2ab7a3576a39c22e1ecfa88bcca60f27168a7647d118e735330714d86"
	I0111 09:07:12.303927  780321 cri.go:96] found id: "2e5ccb5388ffb7117083cc27353adb4a2c137a7141f3cd18699f0c1f048c7e6a"
	I0111 09:07:12.303933  780321 cri.go:96] found id: "84bf236250d57bfed04de7336a9941a59f5c8caf324655276e90564d8c0ffbf9"
	I0111 09:07:12.303938  780321 cri.go:96] found id: "5f9e5b6974decd32e8f4aa12c584d870ee987483bb8f4fc519b1b323595fa69b"
	I0111 09:07:12.303941  780321 cri.go:96] found id: ""
	I0111 09:07:12.303991  780321 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:07:12.589749  780321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:07:12.602784  780321 pause.go:52] kubelet running: false
	I0111 09:07:12.602902  780321 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:07:12.767311  780321 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:07:12.767393  780321 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:07:12.840122  780321 cri.go:96] found id: "b066119e65c645014df48492eae023f983096f10e5eea8c1372800164bafb2e9"
	I0111 09:07:12.840213  780321 cri.go:96] found id: "6a2d81e48ccb6d3fbc670096e077e9460cb9fdaebb6524dc50b18ca4f7bdc024"
	I0111 09:07:12.840225  780321 cri.go:96] found id: "3ed4c1f24cb00260799431425b62ddf25a672a12028fcd8996c2247b447e0b01"
	I0111 09:07:12.840232  780321 cri.go:96] found id: "d42e646528fe412e4b2f31ce0b419736e4a9a98cedde1b525ef43c4b84bdd437"
	I0111 09:07:12.840236  780321 cri.go:96] found id: "34a556d5cd8cc4b1cc7da4d590e25b5f9036f3794393d4a77c3fd96b8e767c7d"
	I0111 09:07:12.840239  780321 cri.go:96] found id: "330e32f7eadb9313968c7bb510089b7831588db3d8cf94a3fabbcbd17728ceb4"
	I0111 09:07:12.840242  780321 cri.go:96] found id: "7df07d00052022e60d6b9a41c00fa011c068566dbbd08a0a3c864f5b97024f9b"
	I0111 09:07:12.840245  780321 cri.go:96] found id: "db3b7cd2ab7a3576a39c22e1ecfa88bcca60f27168a7647d118e735330714d86"
	I0111 09:07:12.840248  780321 cri.go:96] found id: "2e5ccb5388ffb7117083cc27353adb4a2c137a7141f3cd18699f0c1f048c7e6a"
	I0111 09:07:12.840254  780321 cri.go:96] found id: "84bf236250d57bfed04de7336a9941a59f5c8caf324655276e90564d8c0ffbf9"
	I0111 09:07:12.840258  780321 cri.go:96] found id: "5f9e5b6974decd32e8f4aa12c584d870ee987483bb8f4fc519b1b323595fa69b"
	I0111 09:07:12.840261  780321 cri.go:96] found id: ""
	I0111 09:07:12.840314  780321 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:07:12.855354  780321 out.go:203] 
	W0111 09:07:12.858360  780321 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:07:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:07:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 09:07:12.858385  780321 out.go:285] * 
	* 
	W0111 09:07:12.863261  780321 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 09:07:12.866487  780321 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-236664 --alsologtostderr -v=1 failed: exit status 80
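The exit status 80 above corresponds to the GUEST_PAUSE error in the stderr block: listing containers through crictl succeeds on this crio node, but the sudo runc list -f json call that follows fails because /run/runc does not exist. A rough manual reproduction, assuming the profile name and using only commands that already appear in the log above (illustrative, not part of the test harness):

	# CRI-level listing against crio works (same crictl invocation the pause path runs):
	out/minikube-linux-arm64 -p no-preload-236664 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The direct runc query issued afterwards does not:
	out/minikube-linux-arm64 -p no-preload-236664 ssh -- sudo runc list -f json
	# expected: level=error msg="open /run/runc: no such file or directory"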
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-236664
helpers_test.go:244: (dbg) docker inspect no-preload-236664:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7",
	        "Created": "2026-01-11T09:04:51.004254013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 777735,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:06:10.342595011Z",
	            "FinishedAt": "2026-01-11T09:06:09.531070554Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/hosts",
	        "LogPath": "/var/lib/docker/containers/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7-json.log",
	        "Name": "/no-preload-236664",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-236664:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-236664",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7",
	                "LowerDir": "/var/lib/docker/overlay2/3ff6b15e89b8c004230ac70e5f5994d0fb6ac775714bb351b9819d6dc154f20e-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ff6b15e89b8c004230ac70e5f5994d0fb6ac775714bb351b9819d6dc154f20e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ff6b15e89b8c004230ac70e5f5994d0fb6ac775714bb351b9819d6dc154f20e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ff6b15e89b8c004230ac70e5f5994d0fb6ac775714bb351b9819d6dc154f20e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-236664",
	                "Source": "/var/lib/docker/volumes/no-preload-236664/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-236664",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-236664",
	                "name.minikube.sigs.k8s.io": "no-preload-236664",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "46675f2177fa46a1f7fb9bbb91b3b6993f5dada2b7c09b68186666ddb3dd5c7d",
	            "SandboxKey": "/var/run/docker/netns/46675f2177fa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-236664": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:48:d4:60:2e:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d2de3def74a111cc7c6606a54a81f8ccf25a54c9637f0b4509f31f3903e872a",
	                    "EndpointID": "47e01db269b04713aa37c80833a92f087b80297035b0185309da20a6cb075417",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-236664",
	                        "ad25e0395513"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
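For reference, the host-side SSH port the harness dials (33798 in the Ports section above) can be read back with the same Go template that appears in the Run lines earlier in this report; shown here as a standalone command for clarity (illustrative only):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-236664
	# 33798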
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-236664 -n no-preload-236664
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-236664 -n no-preload-236664: exit status 2 (357.974185ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
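The "(may be ok)" note is consistent with the docker inspect output above: the node container itself is still up ("Status": "running"), even though the failed pause already disabled the kubelet (see the pause log earlier). That can be confirmed directly with the container-level inspect the harness uses elsewhere in this report (illustrative):

	docker container inspect no-preload-236664 --format={{.State.Status}}
	# running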
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-236664 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-236664 logs -n 25: (1.275051234s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │ 11 Jan 26 08:56 UTC │
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ delete  │ -p cert-expiration-448134                                                                                                                                                                                                                     │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ start   │ -p force-systemd-flag-630015 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-630015 │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │                     │
	│ delete  │ -p force-systemd-env-472282                                                                                                                                                                                                                   │ force-systemd-env-472282  │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:01 UTC │
	│ start   │ -p cert-options-459267 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ cert-options-459267 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ -p cert-options-459267 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ delete  │ -p cert-options-459267                                                                                                                                                                                                                        │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-931581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │                     │
	│ stop    │ -p old-k8s-version-931581 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-931581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:04 UTC │
	│ image   │ old-k8s-version-931581 image list --format=json                                                                                                                                                                                               │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ pause   │ -p old-k8s-version-931581 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │                     │
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                                                                                     │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                                                                                     │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-236664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │                     │
	│ stop    │ -p no-preload-236664 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │ 11 Jan 26 09:06 UTC │
	│ addons  │ enable dashboard -p no-preload-236664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ image   │ no-preload-236664 image list --format=json                                                                                                                                                                                                    │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ pause   │ -p no-preload-236664 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:06:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:06:10.063724  777610 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:06:10.064197  777610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:06:10.064210  777610 out.go:374] Setting ErrFile to fd 2...
	I0111 09:06:10.064221  777610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:06:10.065060  777610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:06:10.065652  777610 out.go:368] Setting JSON to false
	I0111 09:06:10.066867  777610 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13720,"bootTime":1768108650,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:06:10.067084  777610 start.go:143] virtualization:  
	I0111 09:06:10.070659  777610 out.go:179] * [no-preload-236664] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:06:10.072908  777610 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:06:10.072981  777610 notify.go:221] Checking for updates...
	I0111 09:06:10.076082  777610 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:06:10.079305  777610 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:06:10.082351  777610 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:06:10.085287  777610 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:06:10.088328  777610 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:06:10.091846  777610 config.go:182] Loaded profile config "no-preload-236664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:06:10.092422  777610 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:06:10.126052  777610 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:06:10.126226  777610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:06:10.196126  777610 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:06:10.18557656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:06:10.196238  777610 docker.go:319] overlay module found
	I0111 09:06:10.199557  777610 out.go:179] * Using the docker driver based on existing profile
	I0111 09:06:10.202400  777610 start.go:309] selected driver: docker
	I0111 09:06:10.202420  777610 start.go:928] validating driver "docker" against &{Name:no-preload-236664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:06:10.202525  777610 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:06:10.203310  777610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:06:10.256083  777610 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:06:10.246947919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:06:10.256431  777610 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:06:10.256467  777610 cni.go:84] Creating CNI manager for ""
	I0111 09:06:10.256524  777610 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:06:10.256573  777610 start.go:353] cluster config:
	{Name:no-preload-236664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:06:10.259711  777610 out.go:179] * Starting "no-preload-236664" primary control-plane node in "no-preload-236664" cluster
	I0111 09:06:10.262493  777610 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:06:10.265531  777610 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:06:10.268378  777610 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:06:10.268414  777610 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:06:10.268517  777610 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/config.json ...
	I0111 09:06:10.268828  777610 cache.go:107] acquiring lock: {Name:mke7592fddd2045b523fca2428ddc0663b88772c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.268916  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0111 09:06:10.268928  777610 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 114.126µs
	I0111 09:06:10.268944  777610 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0111 09:06:10.268956  777610 cache.go:107] acquiring lock: {Name:mka93ed5255d21ece6b85aca20055b51e1583edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.268998  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I0111 09:06:10.269008  777610 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 53.81µs
	I0111 09:06:10.269014  777610 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I0111 09:06:10.269092  777610 cache.go:107] acquiring lock: {Name:mk3e1f7f5f36f7e3b242ff5d86252009cd03b858 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.269135  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0111 09:06:10.269141  777610 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 51.685µs
	I0111 09:06:10.269147  777610 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0111 09:06:10.269156  777610 cache.go:107] acquiring lock: {Name:mk17b9d3288a8c36f55558137618c53fb114bff4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.269183  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I0111 09:06:10.269188  777610 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 32.304µs
	I0111 09:06:10.269193  777610 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I0111 09:06:10.269202  777610 cache.go:107] acquiring lock: {Name:mk3545fa2d0a8ca45b860e43eaaa700d6213211e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.269231  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I0111 09:06:10.269236  777610 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 35.094µs
	I0111 09:06:10.269242  777610 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I0111 09:06:10.269250  777610 cache.go:107] acquiring lock: {Name:mkbecbc2e8fbcc821087042d95b724409aa47662 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.269275  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I0111 09:06:10.269279  777610 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 30.13µs
	I0111 09:06:10.269285  777610 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I0111 09:06:10.269293  777610 cache.go:107] acquiring lock: {Name:mke213d3c5eada4cb2452801d6ba8056e0c2260a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.269319  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I0111 09:06:10.269328  777610 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 31.664µs
	I0111 09:06:10.269334  777610 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I0111 09:06:10.269024  777610 cache.go:107] acquiring lock: {Name:mk1920546e4d844033ab047e82c06a7f1485d45d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.269438  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I0111 09:06:10.269445  777610 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 422.471µs
	I0111 09:06:10.269451  777610 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I0111 09:06:10.269459  777610 cache.go:87] Successfully saved all images to host disk.
	I0111 09:06:10.288899  777610 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:06:10.288922  777610 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:06:10.288939  777610 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:06:10.288971  777610 start.go:360] acquireMachinesLock for no-preload-236664: {Name:mk79de85616a4c1001da7e12d7ef8a42711def92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.289031  777610 start.go:364] duration metric: took 39.016µs to acquireMachinesLock for "no-preload-236664"
	I0111 09:06:10.289057  777610 start.go:96] Skipping create...Using existing machine configuration
	I0111 09:06:10.289067  777610 fix.go:54] fixHost starting: 
	I0111 09:06:10.289343  777610 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:06:10.306630  777610 fix.go:112] recreateIfNeeded on no-preload-236664: state=Stopped err=<nil>
	W0111 09:06:10.306662  777610 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 09:06:10.311886  777610 out.go:252] * Restarting existing docker container for "no-preload-236664" ...
	I0111 09:06:10.311976  777610 cli_runner.go:164] Run: docker start no-preload-236664
	I0111 09:06:10.585276  777610 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:06:10.605767  777610 kic.go:430] container "no-preload-236664" state is running.
	I0111 09:06:10.606326  777610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-236664
	I0111 09:06:10.628240  777610 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/config.json ...
	I0111 09:06:10.628508  777610 machine.go:94] provisionDockerMachine start ...
	I0111 09:06:10.628581  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:10.651003  777610 main.go:144] libmachine: Using SSH client type: native
	I0111 09:06:10.651335  777610 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33798 <nil> <nil>}
	I0111 09:06:10.651345  777610 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 09:06:10.653995  777610 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54458->127.0.0.1:33798: read: connection reset by peer
	I0111 09:06:13.801645  777610 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-236664
	
	I0111 09:06:13.801676  777610 ubuntu.go:182] provisioning hostname "no-preload-236664"
	I0111 09:06:13.801754  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:13.819905  777610 main.go:144] libmachine: Using SSH client type: native
	I0111 09:06:13.820219  777610 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33798 <nil> <nil>}
	I0111 09:06:13.820236  777610 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-236664 && echo "no-preload-236664" | sudo tee /etc/hostname
	I0111 09:06:13.975614  777610 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-236664
	
	I0111 09:06:13.975744  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:13.997790  777610 main.go:144] libmachine: Using SSH client type: native
	I0111 09:06:13.998120  777610 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33798 <nil> <nil>}
	I0111 09:06:13.998163  777610 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-236664' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-236664/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-236664' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 09:06:14.146501  777610 main.go:144] libmachine: SSH cmd err, output: <nil>: 
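	(Editor's note: the SSH script above idempotently ensures /etc/hosts inside the container maps 127.0.1.1 to the machine name before provisioning continues. A minimal Go sketch of the same logic, a hypothetical helper rather than minikube's actual code:)

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry mirrors the shell logic above: if no line already ends with
    // the hostname, either rewrite an existing "127.0.1.1 ..." line or append one.
    func ensureHostsEntry(hosts, hostname string) string {
    	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(hosts) {
    		return hosts // entry already present
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
    	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "no-preload-236664"))
    }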
	I0111 09:06:14.146526  777610 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 09:06:14.146582  777610 ubuntu.go:190] setting up certificates
	I0111 09:06:14.146591  777610 provision.go:84] configureAuth start
	I0111 09:06:14.146655  777610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-236664
	I0111 09:06:14.168302  777610 provision.go:143] copyHostCerts
	I0111 09:06:14.168376  777610 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 09:06:14.168392  777610 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 09:06:14.168469  777610 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 09:06:14.168561  777610 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 09:06:14.168566  777610 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 09:06:14.168591  777610 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 09:06:14.168640  777610 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 09:06:14.168644  777610 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 09:06:14.168666  777610 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 09:06:14.168716  777610 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.no-preload-236664 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-236664]
	I0111 09:06:14.360525  777610 provision.go:177] copyRemoteCerts
	I0111 09:06:14.360619  777610 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 09:06:14.360663  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:14.380090  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:14.486837  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 09:06:14.504681  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0111 09:06:14.522867  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 09:06:14.540591  777610 provision.go:87] duration metric: took 393.975664ms to configureAuth
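	(Editor's note: configureAuth above issues a server certificate whose SANs cover 127.0.0.1, the container IP, localhost, minikube and the machine name, signed by the existing ca.pem/ca-key.pem. A rough, self-contained sketch of issuing a SAN-bearing cert with Go's crypto/x509; here the CA is generated on the fly and errors are elided, so this is illustrative only, not minikube's implementation:)

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA key/cert (minikube reuses ca.pem / ca-key.pem instead); errors ignored for brevity.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the same style of SAN list seen in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-236664"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "no-preload-236664"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	fmt.Printf("issued server cert, %d bytes DER\n", len(srvDER))
    }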
	I0111 09:06:14.540661  777610 ubuntu.go:206] setting minikube options for container-runtime
	I0111 09:06:14.540873  777610 config.go:182] Loaded profile config "no-preload-236664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:06:14.540982  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:14.559264  777610 main.go:144] libmachine: Using SSH client type: native
	I0111 09:06:14.559590  777610 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33798 <nil> <nil>}
	I0111 09:06:14.559612  777610 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 09:06:14.906870  777610 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 09:06:14.906897  777610 machine.go:97] duration metric: took 4.278369835s to provisionDockerMachine
	I0111 09:06:14.906910  777610 start.go:293] postStartSetup for "no-preload-236664" (driver="docker")
	I0111 09:06:14.906920  777610 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 09:06:14.906992  777610 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 09:06:14.907038  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:14.928752  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:15.060622  777610 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 09:06:15.065404  777610 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 09:06:15.065470  777610 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 09:06:15.065488  777610 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 09:06:15.065581  777610 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 09:06:15.065702  777610 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 09:06:15.065861  777610 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 09:06:15.074940  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:06:15.094761  777610 start.go:296] duration metric: took 187.818003ms for postStartSetup
	I0111 09:06:15.094856  777610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 09:06:15.094905  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:15.114076  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:15.215347  777610 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 09:06:15.220460  777610 fix.go:56] duration metric: took 4.931385729s for fixHost
	I0111 09:06:15.220488  777610 start.go:83] releasing machines lock for "no-preload-236664", held for 4.931442608s
	I0111 09:06:15.220580  777610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-236664
	I0111 09:06:15.237926  777610 ssh_runner.go:195] Run: cat /version.json
	I0111 09:06:15.237953  777610 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 09:06:15.237986  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:15.238018  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:15.260198  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:15.267839  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:15.362149  777610 ssh_runner.go:195] Run: systemctl --version
	I0111 09:06:15.467873  777610 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 09:06:15.506600  777610 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 09:06:15.511191  777610 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 09:06:15.511289  777610 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 09:06:15.519444  777610 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 09:06:15.519469  777610 start.go:496] detecting cgroup driver to use...
	I0111 09:06:15.519524  777610 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 09:06:15.519599  777610 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 09:06:15.534994  777610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 09:06:15.547728  777610 docker.go:218] disabling cri-docker service (if available) ...
	I0111 09:06:15.547804  777610 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 09:06:15.563694  777610 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 09:06:15.577180  777610 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 09:06:15.692358  777610 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 09:06:15.820224  777610 docker.go:234] disabling docker service ...
	I0111 09:06:15.820368  777610 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 09:06:15.836898  777610 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 09:06:15.850258  777610 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 09:06:15.960694  777610 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 09:06:16.086749  777610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 09:06:16.100006  777610 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 09:06:16.114762  777610 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 09:06:16.114854  777610 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.123692  777610 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 09:06:16.123771  777610 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.133098  777610 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.142060  777610 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.156545  777610 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 09:06:16.165907  777610 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.175026  777610 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.183459  777610 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.192392  777610 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 09:06:16.200025  777610 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 09:06:16.207909  777610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:06:16.326335  777610 ssh_runner.go:195] Run: sudo systemctl restart crio
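	(Editor's note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, setting the pause image, cgroup_manager, conmon_cgroup and default_sysctls, before crio is restarted. The same kind of whole-line rewrite expressed as a small Go helper instead of sed; a hypothetical sketch, with the pre-existing config values invented for the example:)

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setTOMLKey replaces the whole "key = ..." line, matching the sed pattern
    // `s|^.*key = .*$|key = "value"|` used in the log above.
    func setTOMLKey(conf, key, value string) string {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
    }

    func main() {
    	// Example existing config (values hypothetical).
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
    	conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
    	conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
    	fmt.Print(conf)
    }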
	I0111 09:06:16.490467  777610 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 09:06:16.490583  777610 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 09:06:16.494900  777610 start.go:574] Will wait 60s for crictl version
	I0111 09:06:16.495005  777610 ssh_runner.go:195] Run: which crictl
	I0111 09:06:16.499985  777610 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 09:06:16.525816  777610 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 09:06:16.525937  777610 ssh_runner.go:195] Run: crio --version
	I0111 09:06:16.557267  777610 ssh_runner.go:195] Run: crio --version
	I0111 09:06:16.597663  777610 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 09:06:16.600137  777610 cli_runner.go:164] Run: docker network inspect no-preload-236664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:06:16.619466  777610 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 09:06:16.623341  777610 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:06:16.633249  777610 kubeadm.go:884] updating cluster {Name:no-preload-236664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 09:06:16.633365  777610 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:06:16.633408  777610 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:06:16.674289  777610 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:06:16.674315  777610 cache_images.go:86] Images are preloaded, skipping loading
	I0111 09:06:16.674323  777610 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0111 09:06:16.674415  777610 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-236664 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 09:06:16.674506  777610 ssh_runner.go:195] Run: crio config
	I0111 09:06:16.727010  777610 cni.go:84] Creating CNI manager for ""
	I0111 09:06:16.727035  777610 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:06:16.727057  777610 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 09:06:16.727085  777610 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-236664 NodeName:no-preload-236664 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 09:06:16.727217  777610 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-236664"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 09:06:16.727297  777610 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 09:06:16.735185  777610 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 09:06:16.735285  777610 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 09:06:16.743053  777610 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0111 09:06:16.756640  777610 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 09:06:16.769245  777610 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
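	(Editor's note: the kubeadm.yaml written above is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. One quick way to sanity-check that such a stream parses, sketched here with gopkg.in/yaml.v3 as an assumed dependency; this is not how minikube itself validates the file:)

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// Path taken from the scp line above.
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for i := 1; ; i++ {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(fmt.Sprintf("document %d: %v", i, err))
    		}
    		fmt.Printf("document %d: kind=%v apiVersion=%v\n", i, doc["kind"], doc["apiVersion"])
    	}
    }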
	I0111 09:06:16.781800  777610 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 09:06:16.785496  777610 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:06:16.795577  777610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:06:16.904972  777610 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:06:16.921706  777610 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664 for IP: 192.168.85.2
	I0111 09:06:16.921728  777610 certs.go:195] generating shared ca certs ...
	I0111 09:06:16.921745  777610 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:06:16.921935  777610 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 09:06:16.922008  777610 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 09:06:16.922024  777610 certs.go:257] generating profile certs ...
	I0111 09:06:16.922149  777610 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.key
	I0111 09:06:16.922231  777610 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.key.689315f2
	I0111 09:06:16.922292  777610 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.key
	I0111 09:06:16.922424  777610 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 09:06:16.922478  777610 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 09:06:16.922494  777610 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 09:06:16.922550  777610 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 09:06:16.922606  777610 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 09:06:16.922637  777610 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 09:06:16.922708  777610 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:06:16.923345  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 09:06:16.948500  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 09:06:16.967786  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 09:06:16.986909  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 09:06:17.008932  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0111 09:06:17.035366  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 09:06:17.057341  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 09:06:17.080606  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 09:06:17.101466  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 09:06:17.121474  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 09:06:17.142364  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 09:06:17.162067  777610 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 09:06:17.176228  777610 ssh_runner.go:195] Run: openssl version
	I0111 09:06:17.190562  777610 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:06:17.200031  777610 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 09:06:17.208011  777610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:06:17.213263  777610 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:06:17.213378  777610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:06:17.259456  777610 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 09:06:17.266907  777610 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 09:06:17.274253  777610 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 09:06:17.281846  777610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 09:06:17.285869  777610 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 09:06:17.285937  777610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 09:06:17.327927  777610 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 09:06:17.335534  777610 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 09:06:17.343062  777610 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 09:06:17.350800  777610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 09:06:17.355967  777610 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 09:06:17.356077  777610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 09:06:17.397579  777610 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 09:06:17.405330  777610 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 09:06:17.409451  777610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 09:06:17.451477  777610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 09:06:17.495110  777610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 09:06:17.562234  777610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 09:06:17.623089  777610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 09:06:17.717144  777610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
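	(Editor's note: each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The equivalent check in Go's standard library, as a sketch using one of the cert paths from the log:)

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Same question openssl answers with -checkend 86400.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h; would regenerate")
    	} else {
    		fmt.Println("certificate still valid beyond 24h")
    	}
    }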
	I0111 09:06:17.780913  777610 kubeadm.go:401] StartCluster: {Name:no-preload-236664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:06:17.781044  777610 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 09:06:17.781147  777610 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 09:06:17.827152  777610 cri.go:96] found id: "330e32f7eadb9313968c7bb510089b7831588db3d8cf94a3fabbcbd17728ceb4"
	I0111 09:06:17.827216  777610 cri.go:96] found id: "7df07d00052022e60d6b9a41c00fa011c068566dbbd08a0a3c864f5b97024f9b"
	I0111 09:06:17.827235  777610 cri.go:96] found id: "db3b7cd2ab7a3576a39c22e1ecfa88bcca60f27168a7647d118e735330714d86"
	I0111 09:06:17.827255  777610 cri.go:96] found id: "2e5ccb5388ffb7117083cc27353adb4a2c137a7141f3cd18699f0c1f048c7e6a"
	I0111 09:06:17.827297  777610 cri.go:96] found id: ""
	I0111 09:06:17.827367  777610 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 09:06:17.849681  777610 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:06:17Z" level=error msg="open /run/runc: no such file or directory"
	I0111 09:06:17.849803  777610 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 09:06:17.875007  777610 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 09:06:17.875079  777610 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 09:06:17.875164  777610 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 09:06:17.884263  777610 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 09:06:17.884728  777610 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-236664" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:06:17.884892  777610 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-575040/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-236664" cluster setting kubeconfig missing "no-preload-236664" context setting]
	I0111 09:06:17.885227  777610 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:06:17.886628  777610 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 09:06:17.895570  777610 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0111 09:06:17.895643  777610 kubeadm.go:602] duration metric: took 20.543649ms to restartPrimaryControlPlane
	I0111 09:06:17.895668  777610 kubeadm.go:403] duration metric: took 114.764409ms to StartCluster
	I0111 09:06:17.895719  777610 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:06:17.895797  777610 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:06:17.896497  777610 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:06:17.896765  777610 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:06:17.897118  777610 config.go:182] Loaded profile config "no-preload-236664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:06:17.897254  777610 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:06:17.897393  777610 addons.go:70] Setting storage-provisioner=true in profile "no-preload-236664"
	I0111 09:06:17.897423  777610 addons.go:239] Setting addon storage-provisioner=true in "no-preload-236664"
	W0111 09:06:17.897434  777610 addons.go:248] addon storage-provisioner should already be in state true
	I0111 09:06:17.897446  777610 addons.go:70] Setting dashboard=true in profile "no-preload-236664"
	I0111 09:06:17.897473  777610 host.go:66] Checking if "no-preload-236664" exists ...
	I0111 09:06:17.897479  777610 addons.go:239] Setting addon dashboard=true in "no-preload-236664"
	W0111 09:06:17.897516  777610 addons.go:248] addon dashboard should already be in state true
	I0111 09:06:17.897548  777610 host.go:66] Checking if "no-preload-236664" exists ...
	I0111 09:06:17.898020  777610 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:06:17.898069  777610 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:06:17.899206  777610 addons.go:70] Setting default-storageclass=true in profile "no-preload-236664"
	I0111 09:06:17.899232  777610 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-236664"
	I0111 09:06:17.899594  777610 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:06:17.910361  777610 out.go:179] * Verifying Kubernetes components...
	I0111 09:06:17.926239  777610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:06:17.956215  777610 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0111 09:06:17.956290  777610 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:06:17.959216  777610 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:06:17.959264  777610 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:06:17.959329  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:17.968403  777610 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0111 09:06:17.978242  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0111 09:06:17.978266  777610 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0111 09:06:17.978345  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:17.979522  777610 addons.go:239] Setting addon default-storageclass=true in "no-preload-236664"
	W0111 09:06:17.979548  777610 addons.go:248] addon default-storageclass should already be in state true
	I0111 09:06:17.979577  777610 host.go:66] Checking if "no-preload-236664" exists ...
	I0111 09:06:17.980007  777610 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:06:18.017907  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:18.020698  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:18.035720  777610 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:06:18.035746  777610 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:06:18.035809  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:18.069028  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:18.277712  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0111 09:06:18.277737  777610 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0111 09:06:18.330152  777610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:06:18.347134  777610 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:06:18.348133  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0111 09:06:18.348203  777610 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0111 09:06:18.409403  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0111 09:06:18.409431  777610 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0111 09:06:18.444053  777610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:06:18.483547  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0111 09:06:18.483619  777610 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0111 09:06:18.533582  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0111 09:06:18.533646  777610 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0111 09:06:18.603872  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0111 09:06:18.603947  777610 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0111 09:06:18.647229  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0111 09:06:18.647306  777610 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0111 09:06:18.663716  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0111 09:06:18.663804  777610 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0111 09:06:18.678230  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:06:18.678305  777610 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0111 09:06:18.692392  777610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:06:22.678943  777610 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.331712725s)
	I0111 09:06:22.679057  777610 node_ready.go:35] waiting up to 6m0s for node "no-preload-236664" to be "Ready" ...
	I0111 09:06:22.679531  777610 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.235450922s)
	I0111 09:06:22.680268  777610 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.350017117s)
	I0111 09:06:22.709668  777610 node_ready.go:49] node "no-preload-236664" is "Ready"
	I0111 09:06:22.709756  777610 node_ready.go:38] duration metric: took 30.667472ms for node "no-preload-236664" to be "Ready" ...
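	(Editor's note: node_ready above waits for the node's Ready condition to become True via the API. With client-go, that check looks roughly like the following sketch; it is not minikube's implementation, and it reuses the kubeconfig path written earlier in the log:)

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22402-575040/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-236664", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// A node is considered "Ready" when its NodeReady condition reports True.
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
    		}
    	}
    }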
	I0111 09:06:22.709793  777610 api_server.go:52] waiting for apiserver process to appear ...
	I0111 09:06:22.709918  777610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 09:06:22.750382  777610 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.057873462s)
	I0111 09:06:22.753756  777610 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-236664 addons enable metrics-server
	
	I0111 09:06:22.757101  777610 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I0111 09:06:22.760714  777610 addons.go:530] duration metric: took 4.863453217s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I0111 09:06:22.802701  777610 api_server.go:72] duration metric: took 4.905878298s to wait for apiserver process to appear ...
	I0111 09:06:22.802795  777610 api_server.go:88] waiting for apiserver healthz status ...
	I0111 09:06:22.802832  777610 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 09:06:22.830035  777610 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 09:06:22.830137  777610 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 09:06:23.303760  777610 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 09:06:23.312405  777610 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0111 09:06:23.313546  777610 api_server.go:141] control plane version: v1.35.0
	I0111 09:06:23.313575  777610 api_server.go:131] duration metric: took 510.758461ms to wait for apiserver health ...
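	(Editor's note: the healthz wait above first received a 500 while the rbac/bootstrap-roles post-start hook was still running, then a 200 on the next poll. A minimal polling loop in the same spirit; the InsecureSkipVerify transport is an assumption for the sketch, since the apiserver cert is signed by minikubeCA rather than a system CA:)

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Skip verification for the probe; the real client would trust minikubeCA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			fmt.Println("apiserver healthz: ok")
    			return
    		}
    		if resp != nil {
    			resp.Body.Close()
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }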
	I0111 09:06:23.313585  777610 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 09:06:23.317397  777610 system_pods.go:59] 8 kube-system pods found
	I0111 09:06:23.317435  777610 system_pods.go:61] "coredns-7d764666f9-klbbk" [80992683-bfe3-4e82-9b11-b7fbb5d78563] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:06:23.317456  777610 system_pods.go:61] "etcd-no-preload-236664" [0f619fb0-29f6-48d4-aecb-6037e3eefea7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:06:23.317465  777610 system_pods.go:61] "kindnet-qp4zr" [93ff9ed5-c418-43c6-9661-20274d61d8a0] Running
	I0111 09:06:23.317473  777610 system_pods.go:61] "kube-apiserver-no-preload-236664" [e14eb11c-fffc-4ceb-b273-64041b01342a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:06:23.317481  777610 system_pods.go:61] "kube-controller-manager-no-preload-236664" [429a4174-5009-493d-b016-6cb0e5c4779c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:06:23.317486  777610 system_pods.go:61] "kube-proxy-fzn6d" [ebbd59c7-c087-48ed-9d3a-aab1a6c47aab] Running
	I0111 09:06:23.317492  777610 system_pods.go:61] "kube-scheduler-no-preload-236664" [4e3b1490-bf36-4093-a691-c7b17ddd3761] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:06:23.317496  777610 system_pods.go:61] "storage-provisioner" [882fc5e2-1706-42f4-90e2-9b77dfefb288] Running
	I0111 09:06:23.317502  777610 system_pods.go:74] duration metric: took 3.911344ms to wait for pod list to return data ...
	I0111 09:06:23.317510  777610 default_sa.go:34] waiting for default service account to be created ...
	I0111 09:06:23.320438  777610 default_sa.go:45] found service account: "default"
	I0111 09:06:23.320462  777610 default_sa.go:55] duration metric: took 2.946688ms for default service account to be created ...
	I0111 09:06:23.320472  777610 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 09:06:23.323898  777610 system_pods.go:86] 8 kube-system pods found
	I0111 09:06:23.323980  777610 system_pods.go:89] "coredns-7d764666f9-klbbk" [80992683-bfe3-4e82-9b11-b7fbb5d78563] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:06:23.324007  777610 system_pods.go:89] "etcd-no-preload-236664" [0f619fb0-29f6-48d4-aecb-6037e3eefea7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:06:23.324043  777610 system_pods.go:89] "kindnet-qp4zr" [93ff9ed5-c418-43c6-9661-20274d61d8a0] Running
	I0111 09:06:23.324076  777610 system_pods.go:89] "kube-apiserver-no-preload-236664" [e14eb11c-fffc-4ceb-b273-64041b01342a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:06:23.324099  777610 system_pods.go:89] "kube-controller-manager-no-preload-236664" [429a4174-5009-493d-b016-6cb0e5c4779c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:06:23.324136  777610 system_pods.go:89] "kube-proxy-fzn6d" [ebbd59c7-c087-48ed-9d3a-aab1a6c47aab] Running
	I0111 09:06:23.324161  777610 system_pods.go:89] "kube-scheduler-no-preload-236664" [4e3b1490-bf36-4093-a691-c7b17ddd3761] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:06:23.324182  777610 system_pods.go:89] "storage-provisioner" [882fc5e2-1706-42f4-90e2-9b77dfefb288] Running
	I0111 09:06:23.324216  777610 system_pods.go:126] duration metric: took 3.736688ms to wait for k8s-apps to be running ...
	I0111 09:06:23.324238  777610 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 09:06:23.324325  777610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:06:23.338226  777610 system_svc.go:56] duration metric: took 13.978946ms WaitForService to wait for kubelet
	I0111 09:06:23.338255  777610 kubeadm.go:587] duration metric: took 5.441436988s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:06:23.338275  777610 node_conditions.go:102] verifying NodePressure condition ...
	I0111 09:06:23.341950  777610 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 09:06:23.342033  777610 node_conditions.go:123] node cpu capacity is 2
	I0111 09:06:23.342077  777610 node_conditions.go:105] duration metric: took 3.795806ms to run NodePressure ...
	I0111 09:06:23.342105  777610 start.go:242] waiting for startup goroutines ...
	I0111 09:06:23.342169  777610 start.go:247] waiting for cluster config update ...
	I0111 09:06:23.342196  777610 start.go:256] writing updated cluster config ...
	I0111 09:06:23.342531  777610 ssh_runner.go:195] Run: rm -f paused
	I0111 09:06:23.347214  777610 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:06:23.351247  777610 pod_ready.go:83] waiting for pod "coredns-7d764666f9-klbbk" in "kube-system" namespace to be "Ready" or be gone ...
	W0111 09:06:25.357378  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:27.358464  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:29.358785  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:31.856529  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:33.857232  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:35.857446  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:38.357735  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:40.857682  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:43.356961  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:45.359351  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:47.856985  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:50.357154  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:52.857544  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:55.357368  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	I0111 09:06:57.857332  777610 pod_ready.go:94] pod "coredns-7d764666f9-klbbk" is "Ready"
	I0111 09:06:57.857363  777610 pod_ready.go:86] duration metric: took 34.506045716s for pod "coredns-7d764666f9-klbbk" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:57.860182  777610 pod_ready.go:83] waiting for pod "etcd-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:57.864843  777610 pod_ready.go:94] pod "etcd-no-preload-236664" is "Ready"
	I0111 09:06:57.864867  777610 pod_ready.go:86] duration metric: took 4.60582ms for pod "etcd-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:57.867526  777610 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:57.873042  777610 pod_ready.go:94] pod "kube-apiserver-no-preload-236664" is "Ready"
	I0111 09:06:57.873080  777610 pod_ready.go:86] duration metric: took 5.510783ms for pod "kube-apiserver-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:57.875545  777610 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:58.055223  777610 pod_ready.go:94] pod "kube-controller-manager-no-preload-236664" is "Ready"
	I0111 09:06:58.055254  777610 pod_ready.go:86] duration metric: took 179.68366ms for pod "kube-controller-manager-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:58.255642  777610 pod_ready.go:83] waiting for pod "kube-proxy-fzn6d" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:58.654621  777610 pod_ready.go:94] pod "kube-proxy-fzn6d" is "Ready"
	I0111 09:06:58.654649  777610 pod_ready.go:86] duration metric: took 398.981837ms for pod "kube-proxy-fzn6d" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:58.854840  777610 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:59.254702  777610 pod_ready.go:94] pod "kube-scheduler-no-preload-236664" is "Ready"
	I0111 09:06:59.254730  777610 pod_ready.go:86] duration metric: took 399.86175ms for pod "kube-scheduler-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:59.254744  777610 pod_ready.go:40] duration metric: took 35.907450517s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:06:59.307559  777610 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 09:06:59.310612  777610 out.go:203] 
	W0111 09:06:59.313530  777610 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 09:06:59.316425  777610 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:06:59.319364  777610 out.go:179] * Done! kubectl is now configured to use "no-preload-236664" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 09:06:53 no-preload-236664 crio[662]: time="2026-01-11T09:06:53.311220492Z" level=info msg="Created container b066119e65c645014df48492eae023f983096f10e5eea8c1372800164bafb2e9: kube-system/storage-provisioner/storage-provisioner" id=ebe217bd-0cd2-4b30-832e-cb43fc74b887 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:06:53 no-preload-236664 crio[662]: time="2026-01-11T09:06:53.311876549Z" level=info msg="Starting container: b066119e65c645014df48492eae023f983096f10e5eea8c1372800164bafb2e9" id=534ca1c2-8bbe-4b9b-800f-6d4dace32268 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:06:53 no-preload-236664 crio[662]: time="2026-01-11T09:06:53.314650706Z" level=info msg="Started container" PID=1682 containerID=b066119e65c645014df48492eae023f983096f10e5eea8c1372800164bafb2e9 description=kube-system/storage-provisioner/storage-provisioner id=534ca1c2-8bbe-4b9b-800f-6d4dace32268 name=/runtime.v1.RuntimeService/StartContainer sandboxID=51ce52059db1ac19b4128087a5b0def4bfdc2945ccac9198d1f5c9d215aca5af
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.860525998Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.86056512Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.864929484Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.864964824Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.869488468Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.869657726Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.869732049Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.87387012Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.873901981Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.079355755Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b69e67ad-b359-4f49-9e4d-8daa22b16fca name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.080366623Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=52baeebd-1f79-43d0-b7a5-a98e8f28f016 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.081339986Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr/dashboard-metrics-scraper" id=751d1495-73c4-49c1-8b84-46e51f1b217f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.081447753Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.088824644Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.089360192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.106226673Z" level=info msg="Created container 84bf236250d57bfed04de7336a9941a59f5c8caf324655276e90564d8c0ffbf9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr/dashboard-metrics-scraper" id=751d1495-73c4-49c1-8b84-46e51f1b217f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.106917431Z" level=info msg="Starting container: 84bf236250d57bfed04de7336a9941a59f5c8caf324655276e90564d8c0ffbf9" id=76f7b439-6b14-4f97-8973-35059b9a45a0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.108588148Z" level=info msg="Started container" PID=1753 containerID=84bf236250d57bfed04de7336a9941a59f5c8caf324655276e90564d8c0ffbf9 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr/dashboard-metrics-scraper id=76f7b439-6b14-4f97-8973-35059b9a45a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b67f182aab2b6c812c49f71ba54cc2d60fb397ca9d7da7f2a774e834eab89423
	Jan 11 09:07:06 no-preload-236664 conmon[1751]: conmon 84bf236250d57bfed04d <ninfo>: container 1753 exited with status 1
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.320009652Z" level=info msg="Removing container: 3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780" id=42100e0f-d71f-4658-853d-49aeab12aa67 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.327846365Z" level=info msg="Error loading conmon cgroup of container 3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780: cgroup deleted" id=42100e0f-d71f-4658-853d-49aeab12aa67 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.333339982Z" level=info msg="Removed container 3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr/dashboard-metrics-scraper" id=42100e0f-d71f-4658-853d-49aeab12aa67 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	84bf236250d57       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   b67f182aab2b6       dashboard-metrics-scraper-867fb5f87b-5wjzr   kubernetes-dashboard
	b066119e65c64       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           20 seconds ago      Running             storage-provisioner         2                   51ce52059db1a       storage-provisioner                          kube-system
	5f9e5b6974dec       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago      Running             kubernetes-dashboard        0                   807bbdafb3bfa       kubernetes-dashboard-b84665fb8-s44cv         kubernetes-dashboard
	c9952368c5029       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   ddb76cb111d44       busybox                                      default
	6a2d81e48ccb6       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           51 seconds ago      Running             coredns                     1                   6d0c26a264b0b       coredns-7d764666f9-klbbk                     kube-system
	3ed4c1f24cb00       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           51 seconds ago      Running             kindnet-cni                 1                   b1f22c4a19694       kindnet-qp4zr                                kube-system
	d42e646528fe4       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           51 seconds ago      Exited              storage-provisioner         1                   51ce52059db1a       storage-provisioner                          kube-system
	34a556d5cd8cc       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           51 seconds ago      Running             kube-proxy                  1                   ffb92520b4618       kube-proxy-fzn6d                             kube-system
	330e32f7eadb9       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           56 seconds ago      Running             kube-apiserver              1                   1a4feb40647e5       kube-apiserver-no-preload-236664             kube-system
	7df07d0005202       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           56 seconds ago      Running             etcd                        1                   0c2c7bb8d8576       etcd-no-preload-236664                       kube-system
	db3b7cd2ab7a3       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           56 seconds ago      Running             kube-controller-manager     1                   56f6791700f1a       kube-controller-manager-no-preload-236664    kube-system
	2e5ccb5388ffb       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           56 seconds ago      Running             kube-scheduler              1                   a36d2fc0f7f5c       kube-scheduler-no-preload-236664             kube-system
	
	
	==> coredns [6a2d81e48ccb6d3fbc670096e077e9460cb9fdaebb6524dc50b18ca4f7bdc024] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53298 - 60322 "HINFO IN 8896566559478051865.5938670181658848141. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031339399s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-236664
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-236664
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=no-preload-236664
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_05_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:05:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-236664
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:07:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:07:02 +0000   Sun, 11 Jan 2026 09:05:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:07:02 +0000   Sun, 11 Jan 2026 09:05:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:07:02 +0000   Sun, 11 Jan 2026 09:05:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 09:07:02 +0000   Sun, 11 Jan 2026 09:05:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-236664
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                89f99f7b-845b-4e1b-9e20-91037b4226fe
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-klbbk                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-no-preload-236664                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         110s
	  kube-system                 kindnet-qp4zr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-236664              250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-236664     200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-fzn6d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-236664              100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-5wjzr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-s44cv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  107s  node-controller  Node no-preload-236664 event: Registered Node no-preload-236664 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node no-preload-236664 event: Registered Node no-preload-236664 in Controller
	
	
	==> dmesg <==
	[Jan11 08:32] overlayfs: idmapped layers are currently not supported
	[Jan11 08:35] overlayfs: idmapped layers are currently not supported
	[Jan11 08:36] overlayfs: idmapped layers are currently not supported
	[Jan11 08:37] overlayfs: idmapped layers are currently not supported
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	[Jan11 09:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7df07d00052022e60d6b9a41c00fa011c068566dbbd08a0a3c864f5b97024f9b] <==
	{"level":"info","ts":"2026-01-11T09:06:17.981820Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-11T09:06:17.981879Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T09:06:17.981944Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T09:06:18.015875Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:06:18.016147Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:06:18.063712Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T09:06:18.063759Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T09:06:18.118258Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T09:06:18.118304Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:06:18.118337Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:06:18.118348Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:06:18.118362Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T09:06:18.128175Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:06:18.128223Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:06:18.128244Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-11T09:06:18.128254Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:06:18.152955Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-236664 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:06:18.153001Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:06:18.153215Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:06:18.215088Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:06:18.217898Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-11T09:06:18.218047Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:06:18.218092Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T09:06:18.310221Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:06:18.323231Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:07:14 up  3:49,  0 user,  load average: 1.27, 1.38, 1.78
	Linux no-preload-236664 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3ed4c1f24cb00260799431425b62ddf25a672a12028fcd8996c2247b447e0b01] <==
	I0111 09:06:22.646924       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:06:22.647191       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 09:06:22.660448       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:06:22.660481       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:06:22.660500       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:06:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:06:22.853099       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:06:22.862257       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:06:22.939245       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:06:22.939396       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0111 09:06:52.853624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0111 09:06:52.938789       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0111 09:06:52.939774       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0111 09:06:52.939776       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0111 09:06:54.440235       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 09:06:54.440346       1 metrics.go:72] Registering metrics
	I0111 09:06:54.440617       1 controller.go:711] "Syncing nftables rules"
	I0111 09:07:02.853350       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:07:02.853410       1 main.go:301] handling current node
	I0111 09:07:12.858299       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:07:12.858332       1 main.go:301] handling current node
	
	
	==> kube-apiserver [330e32f7eadb9313968c7bb510089b7831588db3d8cf94a3fabbcbd17728ceb4] <==
	I0111 09:06:21.487691       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:21.491254       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 09:06:21.491778       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 09:06:21.497548       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 09:06:21.497669       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 09:06:21.497724       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 09:06:21.497758       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 09:06:21.497845       1 aggregator.go:187] initial CRD sync complete...
	I0111 09:06:21.497859       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 09:06:21.497865       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 09:06:21.497871       1 cache.go:39] Caches are synced for autoregister controller
	E0111 09:06:21.502918       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 09:06:21.547596       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:06:21.550946       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:06:22.115333       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 09:06:22.130548       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 09:06:22.180123       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 09:06:22.319945       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 09:06:22.423340       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:06:22.456837       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:06:22.711648       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.1.190"}
	I0111 09:06:22.727734       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.72.67"}
	I0111 09:06:24.978603       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:06:25.029699       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 09:06:25.287700       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [db3b7cd2ab7a3576a39c22e1ecfa88bcca60f27168a7647d118e735330714d86] <==
	I0111 09:06:24.617934       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-236664"
	I0111 09:06:24.617978       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0111 09:06:24.616171       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616127       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616134       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616142       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616148       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616155       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616160       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616166       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616177       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616226       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616182       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616188       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616194       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616200       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616205       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616211       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616216       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616234       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.643060       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.713884       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.714962       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.714977       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 09:06:24.714983       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [34a556d5cd8cc4b1cc7da4d590e25b5f9036f3794393d4a77c3fd96b8e767c7d] <==
	I0111 09:06:22.915141       1 server_linux.go:53] "Using iptables proxy"
	I0111 09:06:23.012012       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:06:23.114031       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:23.114067       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0111 09:06:23.114153       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 09:06:23.133586       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:06:23.133643       1 server_linux.go:136] "Using iptables Proxier"
	I0111 09:06:23.137374       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 09:06:23.137718       1 server.go:529] "Version info" version="v1.35.0"
	I0111 09:06:23.137819       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:06:23.141375       1 config.go:106] "Starting endpoint slice config controller"
	I0111 09:06:23.141458       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 09:06:23.141769       1 config.go:200] "Starting service config controller"
	I0111 09:06:23.141816       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 09:06:23.142427       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 09:06:23.142480       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 09:06:23.142992       1 config.go:309] "Starting node config controller"
	I0111 09:06:23.143054       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 09:06:23.143084       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 09:06:23.241630       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 09:06:23.242847       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 09:06:23.242864       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2e5ccb5388ffb7117083cc27353adb4a2c137a7141f3cd18699f0c1f048c7e6a] <==
	I0111 09:06:20.049732       1 serving.go:386] Generated self-signed cert in-memory
	W0111 09:06:21.374970       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 09:06:21.375008       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 09:06:21.375018       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 09:06:21.375025       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 09:06:21.495054       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 09:06:21.495085       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:06:21.504773       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 09:06:21.504888       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 09:06:21.504900       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:06:21.504915       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 09:06:21.605797       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 09:06:36 no-preload-236664 kubelet[783]: E0111 09:06:36.233788     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5wjzr_kubernetes-dashboard(2c60c45b-eedf-4622-99ef-f99267c56bc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" podUID="2c60c45b-eedf-4622-99ef-f99267c56bc1"
	Jan 11 09:06:36 no-preload-236664 kubelet[783]: E0111 09:06:36.234341     783 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-236664" containerName="etcd"
	Jan 11 09:06:43 no-preload-236664 kubelet[783]: E0111 09:06:43.033308     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" containerName="dashboard-metrics-scraper"
	Jan 11 09:06:43 no-preload-236664 kubelet[783]: I0111 09:06:43.033359     783 scope.go:122] "RemoveContainer" containerID="18bd9ed562bc872a0050c6ce8f7560a0d32de2039ff3ebd11495fb45149c93ac"
	Jan 11 09:06:43 no-preload-236664 kubelet[783]: E0111 09:06:43.033536     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5wjzr_kubernetes-dashboard(2c60c45b-eedf-4622-99ef-f99267c56bc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" podUID="2c60c45b-eedf-4622-99ef-f99267c56bc1"
	Jan 11 09:06:45 no-preload-236664 kubelet[783]: E0111 09:06:45.079597     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" containerName="dashboard-metrics-scraper"
	Jan 11 09:06:45 no-preload-236664 kubelet[783]: I0111 09:06:45.080166     783 scope.go:122] "RemoveContainer" containerID="18bd9ed562bc872a0050c6ce8f7560a0d32de2039ff3ebd11495fb45149c93ac"
	Jan 11 09:06:45 no-preload-236664 kubelet[783]: I0111 09:06:45.259999     783 scope.go:122] "RemoveContainer" containerID="18bd9ed562bc872a0050c6ce8f7560a0d32de2039ff3ebd11495fb45149c93ac"
	Jan 11 09:06:45 no-preload-236664 kubelet[783]: E0111 09:06:45.260796     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" containerName="dashboard-metrics-scraper"
	Jan 11 09:06:45 no-preload-236664 kubelet[783]: I0111 09:06:45.260859     783 scope.go:122] "RemoveContainer" containerID="3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780"
	Jan 11 09:06:45 no-preload-236664 kubelet[783]: E0111 09:06:45.261286     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5wjzr_kubernetes-dashboard(2c60c45b-eedf-4622-99ef-f99267c56bc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" podUID="2c60c45b-eedf-4622-99ef-f99267c56bc1"
	Jan 11 09:06:53 no-preload-236664 kubelet[783]: E0111 09:06:53.032783     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" containerName="dashboard-metrics-scraper"
	Jan 11 09:06:53 no-preload-236664 kubelet[783]: I0111 09:06:53.032837     783 scope.go:122] "RemoveContainer" containerID="3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780"
	Jan 11 09:06:53 no-preload-236664 kubelet[783]: E0111 09:06:53.033369     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5wjzr_kubernetes-dashboard(2c60c45b-eedf-4622-99ef-f99267c56bc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" podUID="2c60c45b-eedf-4622-99ef-f99267c56bc1"
	Jan 11 09:06:53 no-preload-236664 kubelet[783]: I0111 09:06:53.281949     783 scope.go:122] "RemoveContainer" containerID="d42e646528fe412e4b2f31ce0b419736e4a9a98cedde1b525ef43c4b84bdd437"
	Jan 11 09:06:57 no-preload-236664 kubelet[783]: E0111 09:06:57.613135     783 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-klbbk" containerName="coredns"
	Jan 11 09:07:06 no-preload-236664 kubelet[783]: E0111 09:07:06.078757     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" containerName="dashboard-metrics-scraper"
	Jan 11 09:07:06 no-preload-236664 kubelet[783]: I0111 09:07:06.078802     783 scope.go:122] "RemoveContainer" containerID="3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780"
	Jan 11 09:07:06 no-preload-236664 kubelet[783]: I0111 09:07:06.317845     783 scope.go:122] "RemoveContainer" containerID="3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780"
	Jan 11 09:07:06 no-preload-236664 kubelet[783]: E0111 09:07:06.318171     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" containerName="dashboard-metrics-scraper"
	Jan 11 09:07:06 no-preload-236664 kubelet[783]: I0111 09:07:06.318199     783 scope.go:122] "RemoveContainer" containerID="84bf236250d57bfed04de7336a9941a59f5c8caf324655276e90564d8c0ffbf9"
	Jan 11 09:07:06 no-preload-236664 kubelet[783]: E0111 09:07:06.318344     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5wjzr_kubernetes-dashboard(2c60c45b-eedf-4622-99ef-f99267c56bc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" podUID="2c60c45b-eedf-4622-99ef-f99267c56bc1"
	Jan 11 09:07:11 no-preload-236664 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 09:07:11 no-preload-236664 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 09:07:11 no-preload-236664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5f9e5b6974decd32e8f4aa12c584d870ee987483bb8f4fc519b1b323595fa69b] <==
	2026/01/11 09:06:30 Using namespace: kubernetes-dashboard
	2026/01/11 09:06:30 Using in-cluster config to connect to apiserver
	2026/01/11 09:06:30 Using secret token for csrf signing
	2026/01/11 09:06:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 09:06:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 09:06:30 Successful initial request to the apiserver, version: v1.35.0
	2026/01/11 09:06:30 Generating JWE encryption key
	2026/01/11 09:06:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 09:06:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 09:06:30 Initializing JWE encryption key from synchronized object
	2026/01/11 09:06:30 Creating in-cluster Sidecar client
	2026/01/11 09:06:30 Serving insecurely on HTTP port: 9090
	2026/01/11 09:06:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:07:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:06:30 Starting overwatch
	
	
	==> storage-provisioner [b066119e65c645014df48492eae023f983096f10e5eea8c1372800164bafb2e9] <==
	I0111 09:06:53.330586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 09:06:53.343178       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 09:06:53.343225       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 09:06:53.345320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:06:56.800171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:01.060953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:04.659586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:07.714097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:10.736251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:10.743906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:07:10.744068       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 09:07:10.744307       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-236664_8be3d89b-ebb4-4d41-915c-20315b4b3f3d!
	I0111 09:07:10.744783       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa57bb8e-53f1-4eea-8701-651adbacd6ef", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-236664_8be3d89b-ebb4-4d41-915c-20315b4b3f3d became leader
	W0111 09:07:10.752023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:10.770522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:07:10.844982       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-236664_8be3d89b-ebb4-4d41-915c-20315b4b3f3d!
	W0111 09:07:12.775460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:12.781690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d42e646528fe412e4b2f31ce0b419736e4a9a98cedde1b525ef43c4b84bdd437] <==
	I0111 09:06:22.669086       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 09:06:52.671538       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-236664 -n no-preload-236664
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-236664 -n no-preload-236664: exit status 2 (350.565171ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-236664 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-236664
helpers_test.go:244: (dbg) docker inspect no-preload-236664:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7",
	        "Created": "2026-01-11T09:04:51.004254013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 777735,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:06:10.342595011Z",
	            "FinishedAt": "2026-01-11T09:06:09.531070554Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/hosts",
	        "LogPath": "/var/lib/docker/containers/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7/ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7-json.log",
	        "Name": "/no-preload-236664",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-236664:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-236664",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ad25e0395513809b6cf2c51f8af5ed467fea5ea55b7f323d97a5a5955e142ad7",
	                "LowerDir": "/var/lib/docker/overlay2/3ff6b15e89b8c004230ac70e5f5994d0fb6ac775714bb351b9819d6dc154f20e-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ff6b15e89b8c004230ac70e5f5994d0fb6ac775714bb351b9819d6dc154f20e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ff6b15e89b8c004230ac70e5f5994d0fb6ac775714bb351b9819d6dc154f20e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ff6b15e89b8c004230ac70e5f5994d0fb6ac775714bb351b9819d6dc154f20e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-236664",
	                "Source": "/var/lib/docker/volumes/no-preload-236664/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-236664",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-236664",
	                "name.minikube.sigs.k8s.io": "no-preload-236664",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "46675f2177fa46a1f7fb9bbb91b3b6993f5dada2b7c09b68186666ddb3dd5c7d",
	            "SandboxKey": "/var/run/docker/netns/46675f2177fa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-236664": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:48:d4:60:2e:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d2de3def74a111cc7c6606a54a81f8ccf25a54c9637f0b4509f31f3903e872a",
	                    "EndpointID": "47e01db269b04713aa37c80833a92f087b80297035b0185309da20a6cb075417",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-236664",
	                        "ad25e0395513"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-236664 -n no-preload-236664
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-236664 -n no-preload-236664: exit status 2 (366.32219ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-236664 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-236664 logs -n 25: (1.252866833s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:55 UTC │ 11 Jan 26 08:56 UTC │
	│ start   │ -p cert-expiration-448134 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ delete  │ -p cert-expiration-448134                                                                                                                                                                                                                     │ cert-expiration-448134    │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │ 11 Jan 26 08:59 UTC │
	│ start   │ -p force-systemd-flag-630015 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-630015 │ jenkins │ v1.37.0 │ 11 Jan 26 08:59 UTC │                     │
	│ delete  │ -p force-systemd-env-472282                                                                                                                                                                                                                   │ force-systemd-env-472282  │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:01 UTC │
	│ start   │ -p cert-options-459267 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:01 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ cert-options-459267 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ ssh     │ -p cert-options-459267 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ delete  │ -p cert-options-459267                                                                                                                                                                                                                        │ cert-options-459267       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-931581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │                     │
	│ stop    │ -p old-k8s-version-931581 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-931581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:04 UTC │
	│ image   │ old-k8s-version-931581 image list --format=json                                                                                                                                                                                               │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ pause   │ -p old-k8s-version-931581 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │                     │
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                                                                                     │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                                                                                     │ old-k8s-version-931581    │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-236664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │                     │
	│ stop    │ -p no-preload-236664 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │ 11 Jan 26 09:06 UTC │
	│ addons  │ enable dashboard -p no-preload-236664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ image   │ no-preload-236664 image list --format=json                                                                                                                                                                                                    │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ pause   │ -p no-preload-236664 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-236664         │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:06:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:06:10.063724  777610 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:06:10.064197  777610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:06:10.064210  777610 out.go:374] Setting ErrFile to fd 2...
	I0111 09:06:10.064221  777610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:06:10.065060  777610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:06:10.065652  777610 out.go:368] Setting JSON to false
	I0111 09:06:10.066867  777610 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13720,"bootTime":1768108650,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:06:10.067084  777610 start.go:143] virtualization:  
	I0111 09:06:10.070659  777610 out.go:179] * [no-preload-236664] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:06:10.072908  777610 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:06:10.072981  777610 notify.go:221] Checking for updates...
	I0111 09:06:10.076082  777610 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:06:10.079305  777610 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:06:10.082351  777610 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:06:10.085287  777610 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:06:10.088328  777610 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:06:10.091846  777610 config.go:182] Loaded profile config "no-preload-236664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:06:10.092422  777610 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:06:10.126052  777610 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:06:10.126226  777610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:06:10.196126  777610 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:06:10.18557656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:06:10.196238  777610 docker.go:319] overlay module found
	I0111 09:06:10.199557  777610 out.go:179] * Using the docker driver based on existing profile
	I0111 09:06:10.202400  777610 start.go:309] selected driver: docker
	I0111 09:06:10.202420  777610 start.go:928] validating driver "docker" against &{Name:no-preload-236664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:06:10.202525  777610 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:06:10.203310  777610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:06:10.256083  777610 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:06:10.246947919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:06:10.256431  777610 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:06:10.256467  777610 cni.go:84] Creating CNI manager for ""
	I0111 09:06:10.256524  777610 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:06:10.256573  777610 start.go:353] cluster config:
	{Name:no-preload-236664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:06:10.259711  777610 out.go:179] * Starting "no-preload-236664" primary control-plane node in "no-preload-236664" cluster
	I0111 09:06:10.262493  777610 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:06:10.265531  777610 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:06:10.268378  777610 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:06:10.268414  777610 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:06:10.268517  777610 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/config.json ...
	I0111 09:06:10.268828  777610 cache.go:107] acquiring lock: {Name:mke7592fddd2045b523fca2428ddc0663b88772c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.268916  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0111 09:06:10.268928  777610 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 114.126µs
	I0111 09:06:10.268944  777610 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0111 09:06:10.268956  777610 cache.go:107] acquiring lock: {Name:mka93ed5255d21ece6b85aca20055b51e1583edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.268998  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I0111 09:06:10.269008  777610 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 53.81µs
	I0111 09:06:10.269014  777610 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I0111 09:06:10.269092  777610 cache.go:107] acquiring lock: {Name:mk3e1f7f5f36f7e3b242ff5d86252009cd03b858 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.269135  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0111 09:06:10.269141  777610 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 51.685µs
	I0111 09:06:10.269147  777610 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0111 09:06:10.269156  777610 cache.go:107] acquiring lock: {Name:mk17b9d3288a8c36f55558137618c53fb114bff4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.269183  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I0111 09:06:10.269188  777610 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 32.304µs
	I0111 09:06:10.269193  777610 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I0111 09:06:10.269202  777610 cache.go:107] acquiring lock: {Name:mk3545fa2d0a8ca45b860e43eaaa700d6213211e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.269231  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I0111 09:06:10.269236  777610 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 35.094µs
	I0111 09:06:10.269242  777610 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I0111 09:06:10.269250  777610 cache.go:107] acquiring lock: {Name:mkbecbc2e8fbcc821087042d95b724409aa47662 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.269275  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I0111 09:06:10.269279  777610 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 30.13µs
	I0111 09:06:10.269285  777610 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I0111 09:06:10.269293  777610 cache.go:107] acquiring lock: {Name:mke213d3c5eada4cb2452801d6ba8056e0c2260a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.269319  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I0111 09:06:10.269328  777610 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 31.664µs
	I0111 09:06:10.269334  777610 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I0111 09:06:10.269024  777610 cache.go:107] acquiring lock: {Name:mk1920546e4d844033ab047e82c06a7f1485d45d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.269438  777610 cache.go:115] /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I0111 09:06:10.269445  777610 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 422.471µs
	I0111 09:06:10.269451  777610 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I0111 09:06:10.269459  777610 cache.go:87] Successfully saved all images to host disk.
	I0111 09:06:10.288899  777610 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:06:10.288922  777610 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:06:10.288939  777610 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:06:10.288971  777610 start.go:360] acquireMachinesLock for no-preload-236664: {Name:mk79de85616a4c1001da7e12d7ef8a42711def92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:06:10.289031  777610 start.go:364] duration metric: took 39.016µs to acquireMachinesLock for "no-preload-236664"
	I0111 09:06:10.289057  777610 start.go:96] Skipping create...Using existing machine configuration
	I0111 09:06:10.289067  777610 fix.go:54] fixHost starting: 
	I0111 09:06:10.289343  777610 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:06:10.306630  777610 fix.go:112] recreateIfNeeded on no-preload-236664: state=Stopped err=<nil>
	W0111 09:06:10.306662  777610 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 09:06:10.311886  777610 out.go:252] * Restarting existing docker container for "no-preload-236664" ...
	I0111 09:06:10.311976  777610 cli_runner.go:164] Run: docker start no-preload-236664
	I0111 09:06:10.585276  777610 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:06:10.605767  777610 kic.go:430] container "no-preload-236664" state is running.
	I0111 09:06:10.606326  777610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-236664
	I0111 09:06:10.628240  777610 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/config.json ...
	I0111 09:06:10.628508  777610 machine.go:94] provisionDockerMachine start ...
	I0111 09:06:10.628581  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:10.651003  777610 main.go:144] libmachine: Using SSH client type: native
	I0111 09:06:10.651335  777610 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33798 <nil> <nil>}
	I0111 09:06:10.651345  777610 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 09:06:10.653995  777610 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54458->127.0.0.1:33798: read: connection reset by peer
	I0111 09:06:13.801645  777610 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-236664
	
	I0111 09:06:13.801676  777610 ubuntu.go:182] provisioning hostname "no-preload-236664"
	I0111 09:06:13.801754  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:13.819905  777610 main.go:144] libmachine: Using SSH client type: native
	I0111 09:06:13.820219  777610 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33798 <nil> <nil>}
	I0111 09:06:13.820236  777610 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-236664 && echo "no-preload-236664" | sudo tee /etc/hostname
	I0111 09:06:13.975614  777610 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-236664
	
	I0111 09:06:13.975744  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:13.997790  777610 main.go:144] libmachine: Using SSH client type: native
	I0111 09:06:13.998120  777610 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33798 <nil> <nil>}
	I0111 09:06:13.998163  777610 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-236664' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-236664/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-236664' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 09:06:14.146501  777610 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 09:06:14.146526  777610 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 09:06:14.146582  777610 ubuntu.go:190] setting up certificates
	I0111 09:06:14.146591  777610 provision.go:84] configureAuth start
	I0111 09:06:14.146655  777610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-236664
	I0111 09:06:14.168302  777610 provision.go:143] copyHostCerts
	I0111 09:06:14.168376  777610 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 09:06:14.168392  777610 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 09:06:14.168469  777610 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 09:06:14.168561  777610 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 09:06:14.168566  777610 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 09:06:14.168591  777610 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 09:06:14.168640  777610 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 09:06:14.168644  777610 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 09:06:14.168666  777610 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 09:06:14.168716  777610 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.no-preload-236664 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-236664]
	I0111 09:06:14.360525  777610 provision.go:177] copyRemoteCerts
	I0111 09:06:14.360619  777610 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 09:06:14.360663  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:14.380090  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:14.486837  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 09:06:14.504681  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0111 09:06:14.522867  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 09:06:14.540591  777610 provision.go:87] duration metric: took 393.975664ms to configureAuth
	I0111 09:06:14.540661  777610 ubuntu.go:206] setting minikube options for container-runtime
	I0111 09:06:14.540873  777610 config.go:182] Loaded profile config "no-preload-236664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:06:14.540982  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:14.559264  777610 main.go:144] libmachine: Using SSH client type: native
	I0111 09:06:14.559590  777610 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33798 <nil> <nil>}
	I0111 09:06:14.559612  777610 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 09:06:14.906870  777610 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 09:06:14.906897  777610 machine.go:97] duration metric: took 4.278369835s to provisionDockerMachine
	I0111 09:06:14.906910  777610 start.go:293] postStartSetup for "no-preload-236664" (driver="docker")
	I0111 09:06:14.906920  777610 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 09:06:14.906992  777610 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 09:06:14.907038  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:14.928752  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:15.060622  777610 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 09:06:15.065404  777610 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 09:06:15.065470  777610 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 09:06:15.065488  777610 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 09:06:15.065581  777610 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 09:06:15.065702  777610 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 09:06:15.065861  777610 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 09:06:15.074940  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:06:15.094761  777610 start.go:296] duration metric: took 187.818003ms for postStartSetup
	I0111 09:06:15.094856  777610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 09:06:15.094905  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:15.114076  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:15.215347  777610 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 09:06:15.220460  777610 fix.go:56] duration metric: took 4.931385729s for fixHost
	I0111 09:06:15.220488  777610 start.go:83] releasing machines lock for "no-preload-236664", held for 4.931442608s
	I0111 09:06:15.220580  777610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-236664
	I0111 09:06:15.237926  777610 ssh_runner.go:195] Run: cat /version.json
	I0111 09:06:15.237953  777610 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 09:06:15.237986  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:15.238018  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:15.260198  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:15.267839  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:15.362149  777610 ssh_runner.go:195] Run: systemctl --version
	I0111 09:06:15.467873  777610 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 09:06:15.506600  777610 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 09:06:15.511191  777610 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 09:06:15.511289  777610 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 09:06:15.519444  777610 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 09:06:15.519469  777610 start.go:496] detecting cgroup driver to use...
	I0111 09:06:15.519524  777610 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 09:06:15.519599  777610 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 09:06:15.534994  777610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 09:06:15.547728  777610 docker.go:218] disabling cri-docker service (if available) ...
	I0111 09:06:15.547804  777610 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 09:06:15.563694  777610 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 09:06:15.577180  777610 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 09:06:15.692358  777610 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 09:06:15.820224  777610 docker.go:234] disabling docker service ...
	I0111 09:06:15.820368  777610 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 09:06:15.836898  777610 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 09:06:15.850258  777610 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 09:06:15.960694  777610 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 09:06:16.086749  777610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 09:06:16.100006  777610 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 09:06:16.114762  777610 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 09:06:16.114854  777610 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.123692  777610 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 09:06:16.123771  777610 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.133098  777610 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.142060  777610 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.156545  777610 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 09:06:16.165907  777610 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.175026  777610 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.183459  777610 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:06:16.192392  777610 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 09:06:16.200025  777610 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 09:06:16.207909  777610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:06:16.326335  777610 ssh_runner.go:195] Run: sudo systemctl restart crio
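	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted; a minimal spot-check of the applied settings from the host (a sketch, assuming the profile name used in this run) is:
		minikube -p no-preload-236664 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
		minikube -p no-preload-236664 ssh -- cat /etc/crictl.yaml   # should show runtime-endpoint: unix:///var/run/crio/crio.sock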
	I0111 09:06:16.490467  777610 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 09:06:16.490583  777610 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 09:06:16.494900  777610 start.go:574] Will wait 60s for crictl version
	I0111 09:06:16.495005  777610 ssh_runner.go:195] Run: which crictl
	I0111 09:06:16.499985  777610 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 09:06:16.525816  777610 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 09:06:16.525937  777610 ssh_runner.go:195] Run: crio --version
	I0111 09:06:16.557267  777610 ssh_runner.go:195] Run: crio --version
	I0111 09:06:16.597663  777610 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 09:06:16.600137  777610 cli_runner.go:164] Run: docker network inspect no-preload-236664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:06:16.619466  777610 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 09:06:16.623341  777610 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:06:16.633249  777610 kubeadm.go:884] updating cluster {Name:no-preload-236664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 09:06:16.633365  777610 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:06:16.633408  777610 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:06:16.674289  777610 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:06:16.674315  777610 cache_images.go:86] Images are preloaded, skipping loading
	I0111 09:06:16.674323  777610 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0111 09:06:16.674415  777610 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-236664 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 09:06:16.674506  777610 ssh_runner.go:195] Run: crio config
	I0111 09:06:16.727010  777610 cni.go:84] Creating CNI manager for ""
	I0111 09:06:16.727035  777610 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:06:16.727057  777610 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 09:06:16.727085  777610 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-236664 NodeName:no-preload-236664 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 09:06:16.727217  777610 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-236664"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 09:06:16.727297  777610 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 09:06:16.735185  777610 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 09:06:16.735285  777610 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 09:06:16.743053  777610 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0111 09:06:16.756640  777610 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 09:06:16.769245  777610 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
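	The rendered configuration above is copied to /var/tmp/minikube/kubeadm.yaml.new and the kubelet flags to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; a sketch for inspecting them on the node follows (the kubeadm path is an assumption based on the binaries directory listed above, and the config validate subcommand exists only on recent kubeadm releases):
		minikube -p no-preload-236664 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
		minikube -p no-preload-236664 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
		minikube -p no-preload-236664 ssh -- sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new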
	I0111 09:06:16.781800  777610 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 09:06:16.785496  777610 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:06:16.795577  777610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:06:16.904972  777610 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:06:16.921706  777610 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664 for IP: 192.168.85.2
	I0111 09:06:16.921728  777610 certs.go:195] generating shared ca certs ...
	I0111 09:06:16.921745  777610 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:06:16.921935  777610 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 09:06:16.922008  777610 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 09:06:16.922024  777610 certs.go:257] generating profile certs ...
	I0111 09:06:16.922149  777610 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.key
	I0111 09:06:16.922231  777610 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.key.689315f2
	I0111 09:06:16.922292  777610 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.key
	I0111 09:06:16.922424  777610 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 09:06:16.922478  777610 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 09:06:16.922494  777610 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 09:06:16.922550  777610 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 09:06:16.922606  777610 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 09:06:16.922637  777610 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 09:06:16.922708  777610 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:06:16.923345  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 09:06:16.948500  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 09:06:16.967786  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 09:06:16.986909  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 09:06:17.008932  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0111 09:06:17.035366  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 09:06:17.057341  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 09:06:17.080606  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 09:06:17.101466  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 09:06:17.121474  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 09:06:17.142364  777610 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 09:06:17.162067  777610 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 09:06:17.176228  777610 ssh_runner.go:195] Run: openssl version
	I0111 09:06:17.190562  777610 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:06:17.200031  777610 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 09:06:17.208011  777610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:06:17.213263  777610 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:06:17.213378  777610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:06:17.259456  777610 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 09:06:17.266907  777610 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 09:06:17.274253  777610 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 09:06:17.281846  777610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 09:06:17.285869  777610 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 09:06:17.285937  777610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 09:06:17.327927  777610 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 09:06:17.335534  777610 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 09:06:17.343062  777610 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 09:06:17.350800  777610 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 09:06:17.355967  777610 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 09:06:17.356077  777610 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 09:06:17.397579  777610 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
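	The openssl -hash / test -L pairs above follow the standard OpenSSL trust-store convention: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash plus a .0 suffix. Reproducing one link by hand on the node (a sketch, using the minikubeCA file from this run):
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # subject hash; b5213941 in this run
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0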
	I0111 09:06:17.405330  777610 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 09:06:17.409451  777610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 09:06:17.451477  777610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 09:06:17.495110  777610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 09:06:17.562234  777610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 09:06:17.623089  777610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 09:06:17.717144  777610 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
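	Each -checkend 86400 probe above fails only if the certificate expires within the next 24 hours, which is how this step decides whether control-plane certificates need regeneration; to print the actual expiry instead (on the node, path taken from the run above):
		sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt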
	I0111 09:06:17.780913  777610 kubeadm.go:401] StartCluster: {Name:no-preload-236664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-236664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:06:17.781044  777610 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 09:06:17.781147  777610 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 09:06:17.827152  777610 cri.go:96] found id: "330e32f7eadb9313968c7bb510089b7831588db3d8cf94a3fabbcbd17728ceb4"
	I0111 09:06:17.827216  777610 cri.go:96] found id: "7df07d00052022e60d6b9a41c00fa011c068566dbbd08a0a3c864f5b97024f9b"
	I0111 09:06:17.827235  777610 cri.go:96] found id: "db3b7cd2ab7a3576a39c22e1ecfa88bcca60f27168a7647d118e735330714d86"
	I0111 09:06:17.827255  777610 cri.go:96] found id: "2e5ccb5388ffb7117083cc27353adb4a2c137a7141f3cd18699f0c1f048c7e6a"
	I0111 09:06:17.827297  777610 cri.go:96] found id: ""
	I0111 09:06:17.827367  777610 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 09:06:17.849681  777610 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:06:17Z" level=error msg="open /run/runc: no such file or directory"
	I0111 09:06:17.849803  777610 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 09:06:17.875007  777610 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 09:06:17.875079  777610 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 09:06:17.875164  777610 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 09:06:17.884263  777610 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 09:06:17.884728  777610 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-236664" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:06:17.884892  777610 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-575040/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-236664" cluster setting kubeconfig missing "no-preload-236664" context setting]
	I0111 09:06:17.885227  777610 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:06:17.886628  777610 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 09:06:17.895570  777610 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0111 09:06:17.895643  777610 kubeadm.go:602] duration metric: took 20.543649ms to restartPrimaryControlPlane
	I0111 09:06:17.895668  777610 kubeadm.go:403] duration metric: took 114.764409ms to StartCluster
	I0111 09:06:17.895719  777610 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:06:17.895797  777610 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:06:17.896497  777610 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:06:17.896765  777610 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:06:17.897118  777610 config.go:182] Loaded profile config "no-preload-236664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:06:17.897254  777610 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:06:17.897393  777610 addons.go:70] Setting storage-provisioner=true in profile "no-preload-236664"
	I0111 09:06:17.897423  777610 addons.go:239] Setting addon storage-provisioner=true in "no-preload-236664"
	W0111 09:06:17.897434  777610 addons.go:248] addon storage-provisioner should already be in state true
	I0111 09:06:17.897446  777610 addons.go:70] Setting dashboard=true in profile "no-preload-236664"
	I0111 09:06:17.897473  777610 host.go:66] Checking if "no-preload-236664" exists ...
	I0111 09:06:17.897479  777610 addons.go:239] Setting addon dashboard=true in "no-preload-236664"
	W0111 09:06:17.897516  777610 addons.go:248] addon dashboard should already be in state true
	I0111 09:06:17.897548  777610 host.go:66] Checking if "no-preload-236664" exists ...
	I0111 09:06:17.898020  777610 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:06:17.898069  777610 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:06:17.899206  777610 addons.go:70] Setting default-storageclass=true in profile "no-preload-236664"
	I0111 09:06:17.899232  777610 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-236664"
	I0111 09:06:17.899594  777610 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:06:17.910361  777610 out.go:179] * Verifying Kubernetes components...
	I0111 09:06:17.926239  777610 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:06:17.956215  777610 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0111 09:06:17.956290  777610 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:06:17.959216  777610 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:06:17.959264  777610 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:06:17.959329  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:17.968403  777610 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0111 09:06:17.978242  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0111 09:06:17.978266  777610 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0111 09:06:17.978345  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:17.979522  777610 addons.go:239] Setting addon default-storageclass=true in "no-preload-236664"
	W0111 09:06:17.979548  777610 addons.go:248] addon default-storageclass should already be in state true
	I0111 09:06:17.979577  777610 host.go:66] Checking if "no-preload-236664" exists ...
	I0111 09:06:17.980007  777610 cli_runner.go:164] Run: docker container inspect no-preload-236664 --format={{.State.Status}}
	I0111 09:06:18.017907  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:18.020698  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:18.035720  777610 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:06:18.035746  777610 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:06:18.035809  777610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-236664
	I0111 09:06:18.069028  777610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/no-preload-236664/id_rsa Username:docker}
	I0111 09:06:18.277712  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0111 09:06:18.277737  777610 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0111 09:06:18.330152  777610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:06:18.347134  777610 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:06:18.348133  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0111 09:06:18.348203  777610 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0111 09:06:18.409403  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0111 09:06:18.409431  777610 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0111 09:06:18.444053  777610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:06:18.483547  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0111 09:06:18.483619  777610 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0111 09:06:18.533582  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0111 09:06:18.533646  777610 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0111 09:06:18.603872  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0111 09:06:18.603947  777610 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0111 09:06:18.647229  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0111 09:06:18.647306  777610 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0111 09:06:18.663716  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0111 09:06:18.663804  777610 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0111 09:06:18.678230  777610 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:06:18.678305  777610 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0111 09:06:18.692392  777610 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
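	Once the dashboard manifests are applied, the workloads land in the kubernetes-dashboard namespace (their containers appear in the listing at the end of this log); a quick way to watch them come up, assuming the kubeconfig context created for this profile and the usual addon deployment name:
		kubectl --context no-preload-236664 -n kubernetes-dashboard get deploy,pods
		kubectl --context no-preload-236664 -n kubernetes-dashboard rollout status deploy/kubernetes-dashboard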
	I0111 09:06:22.678943  777610 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.331712725s)
	I0111 09:06:22.679057  777610 node_ready.go:35] waiting up to 6m0s for node "no-preload-236664" to be "Ready" ...
	I0111 09:06:22.679531  777610 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.235450922s)
	I0111 09:06:22.680268  777610 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.350017117s)
	I0111 09:06:22.709668  777610 node_ready.go:49] node "no-preload-236664" is "Ready"
	I0111 09:06:22.709756  777610 node_ready.go:38] duration metric: took 30.667472ms for node "no-preload-236664" to be "Ready" ...
	I0111 09:06:22.709793  777610 api_server.go:52] waiting for apiserver process to appear ...
	I0111 09:06:22.709918  777610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 09:06:22.750382  777610 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.057873462s)
	I0111 09:06:22.753756  777610 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-236664 addons enable metrics-server
	
	I0111 09:06:22.757101  777610 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I0111 09:06:22.760714  777610 addons.go:530] duration metric: took 4.863453217s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
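	The enabled-addons summary above can be re-checked at any time with:
		minikube -p no-preload-236664 addons list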
	I0111 09:06:22.802701  777610 api_server.go:72] duration metric: took 4.905878298s to wait for apiserver process to appear ...
	I0111 09:06:22.802795  777610 api_server.go:88] waiting for apiserver healthz status ...
	I0111 09:06:22.802832  777610 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 09:06:22.830035  777610 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0111 09:06:22.830137  777610 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0111 09:06:23.303760  777610 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 09:06:23.312405  777610 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0111 09:06:23.313546  777610 api_server.go:141] control plane version: v1.35.0
	I0111 09:06:23.313575  777610 api_server.go:131] duration metric: took 510.758461ms to wait for apiserver health ...
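	The initial 500 above comes from the rbac/bootstrap-roles post-start hook not having completed yet; the retry about half a second later returns 200. The same probe can be issued by hand through the API server, assuming the profile's kubeconfig context:
		kubectl --context no-preload-236664 get --raw='/healthz?verbose'
		kubectl --context no-preload-236664 get --raw='/readyz?verbose'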
	I0111 09:06:23.313585  777610 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 09:06:23.317397  777610 system_pods.go:59] 8 kube-system pods found
	I0111 09:06:23.317435  777610 system_pods.go:61] "coredns-7d764666f9-klbbk" [80992683-bfe3-4e82-9b11-b7fbb5d78563] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:06:23.317456  777610 system_pods.go:61] "etcd-no-preload-236664" [0f619fb0-29f6-48d4-aecb-6037e3eefea7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:06:23.317465  777610 system_pods.go:61] "kindnet-qp4zr" [93ff9ed5-c418-43c6-9661-20274d61d8a0] Running
	I0111 09:06:23.317473  777610 system_pods.go:61] "kube-apiserver-no-preload-236664" [e14eb11c-fffc-4ceb-b273-64041b01342a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:06:23.317481  777610 system_pods.go:61] "kube-controller-manager-no-preload-236664" [429a4174-5009-493d-b016-6cb0e5c4779c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:06:23.317486  777610 system_pods.go:61] "kube-proxy-fzn6d" [ebbd59c7-c087-48ed-9d3a-aab1a6c47aab] Running
	I0111 09:06:23.317492  777610 system_pods.go:61] "kube-scheduler-no-preload-236664" [4e3b1490-bf36-4093-a691-c7b17ddd3761] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:06:23.317496  777610 system_pods.go:61] "storage-provisioner" [882fc5e2-1706-42f4-90e2-9b77dfefb288] Running
	I0111 09:06:23.317502  777610 system_pods.go:74] duration metric: took 3.911344ms to wait for pod list to return data ...
	I0111 09:06:23.317510  777610 default_sa.go:34] waiting for default service account to be created ...
	I0111 09:06:23.320438  777610 default_sa.go:45] found service account: "default"
	I0111 09:06:23.320462  777610 default_sa.go:55] duration metric: took 2.946688ms for default service account to be created ...
	I0111 09:06:23.320472  777610 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 09:06:23.323898  777610 system_pods.go:86] 8 kube-system pods found
	I0111 09:06:23.323980  777610 system_pods.go:89] "coredns-7d764666f9-klbbk" [80992683-bfe3-4e82-9b11-b7fbb5d78563] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:06:23.324007  777610 system_pods.go:89] "etcd-no-preload-236664" [0f619fb0-29f6-48d4-aecb-6037e3eefea7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:06:23.324043  777610 system_pods.go:89] "kindnet-qp4zr" [93ff9ed5-c418-43c6-9661-20274d61d8a0] Running
	I0111 09:06:23.324076  777610 system_pods.go:89] "kube-apiserver-no-preload-236664" [e14eb11c-fffc-4ceb-b273-64041b01342a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:06:23.324099  777610 system_pods.go:89] "kube-controller-manager-no-preload-236664" [429a4174-5009-493d-b016-6cb0e5c4779c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:06:23.324136  777610 system_pods.go:89] "kube-proxy-fzn6d" [ebbd59c7-c087-48ed-9d3a-aab1a6c47aab] Running
	I0111 09:06:23.324161  777610 system_pods.go:89] "kube-scheduler-no-preload-236664" [4e3b1490-bf36-4093-a691-c7b17ddd3761] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:06:23.324182  777610 system_pods.go:89] "storage-provisioner" [882fc5e2-1706-42f4-90e2-9b77dfefb288] Running
	I0111 09:06:23.324216  777610 system_pods.go:126] duration metric: took 3.736688ms to wait for k8s-apps to be running ...
	I0111 09:06:23.324238  777610 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 09:06:23.324325  777610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:06:23.338226  777610 system_svc.go:56] duration metric: took 13.978946ms WaitForService to wait for kubelet
	I0111 09:06:23.338255  777610 kubeadm.go:587] duration metric: took 5.441436988s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:06:23.338275  777610 node_conditions.go:102] verifying NodePressure condition ...
	I0111 09:06:23.341950  777610 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 09:06:23.342033  777610 node_conditions.go:123] node cpu capacity is 2
	I0111 09:06:23.342077  777610 node_conditions.go:105] duration metric: took 3.795806ms to run NodePressure ...
	I0111 09:06:23.342105  777610 start.go:242] waiting for startup goroutines ...
	I0111 09:06:23.342169  777610 start.go:247] waiting for cluster config update ...
	I0111 09:06:23.342196  777610 start.go:256] writing updated cluster config ...
	I0111 09:06:23.342531  777610 ssh_runner.go:195] Run: rm -f paused
	I0111 09:06:23.347214  777610 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:06:23.351247  777610 pod_ready.go:83] waiting for pod "coredns-7d764666f9-klbbk" in "kube-system" namespace to be "Ready" or be gone ...
	W0111 09:06:25.357378  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:27.358464  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:29.358785  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:31.856529  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:33.857232  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:35.857446  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:38.357735  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:40.857682  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:43.356961  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:45.359351  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:47.856985  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:50.357154  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:52.857544  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	W0111 09:06:55.357368  777610 pod_ready.go:104] pod "coredns-7d764666f9-klbbk" is not "Ready", error: <nil>
	I0111 09:06:57.857332  777610 pod_ready.go:94] pod "coredns-7d764666f9-klbbk" is "Ready"
	I0111 09:06:57.857363  777610 pod_ready.go:86] duration metric: took 34.506045716s for pod "coredns-7d764666f9-klbbk" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:57.860182  777610 pod_ready.go:83] waiting for pod "etcd-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:57.864843  777610 pod_ready.go:94] pod "etcd-no-preload-236664" is "Ready"
	I0111 09:06:57.864867  777610 pod_ready.go:86] duration metric: took 4.60582ms for pod "etcd-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:57.867526  777610 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:57.873042  777610 pod_ready.go:94] pod "kube-apiserver-no-preload-236664" is "Ready"
	I0111 09:06:57.873080  777610 pod_ready.go:86] duration metric: took 5.510783ms for pod "kube-apiserver-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:57.875545  777610 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:58.055223  777610 pod_ready.go:94] pod "kube-controller-manager-no-preload-236664" is "Ready"
	I0111 09:06:58.055254  777610 pod_ready.go:86] duration metric: took 179.68366ms for pod "kube-controller-manager-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:58.255642  777610 pod_ready.go:83] waiting for pod "kube-proxy-fzn6d" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:58.654621  777610 pod_ready.go:94] pod "kube-proxy-fzn6d" is "Ready"
	I0111 09:06:58.654649  777610 pod_ready.go:86] duration metric: took 398.981837ms for pod "kube-proxy-fzn6d" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:58.854840  777610 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:59.254702  777610 pod_ready.go:94] pod "kube-scheduler-no-preload-236664" is "Ready"
	I0111 09:06:59.254730  777610 pod_ready.go:86] duration metric: took 399.86175ms for pod "kube-scheduler-no-preload-236664" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:06:59.254744  777610 pod_ready.go:40] duration metric: took 35.907450517s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
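	The extra wait above polls each control-plane pod by label until it reports Ready; a rough kubectl equivalent for one of those labels (a sketch, not what the test binary itself runs) is:
		kubectl --context no-preload-236664 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m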
	I0111 09:06:59.307559  777610 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 09:06:59.310612  777610 out.go:203] 
	W0111 09:06:59.313530  777610 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 09:06:59.316425  777610 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:06:59.319364  777610 out.go:179] * Done! kubectl is now configured to use "no-preload-236664" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 09:06:53 no-preload-236664 crio[662]: time="2026-01-11T09:06:53.311220492Z" level=info msg="Created container b066119e65c645014df48492eae023f983096f10e5eea8c1372800164bafb2e9: kube-system/storage-provisioner/storage-provisioner" id=ebe217bd-0cd2-4b30-832e-cb43fc74b887 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:06:53 no-preload-236664 crio[662]: time="2026-01-11T09:06:53.311876549Z" level=info msg="Starting container: b066119e65c645014df48492eae023f983096f10e5eea8c1372800164bafb2e9" id=534ca1c2-8bbe-4b9b-800f-6d4dace32268 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:06:53 no-preload-236664 crio[662]: time="2026-01-11T09:06:53.314650706Z" level=info msg="Started container" PID=1682 containerID=b066119e65c645014df48492eae023f983096f10e5eea8c1372800164bafb2e9 description=kube-system/storage-provisioner/storage-provisioner id=534ca1c2-8bbe-4b9b-800f-6d4dace32268 name=/runtime.v1.RuntimeService/StartContainer sandboxID=51ce52059db1ac19b4128087a5b0def4bfdc2945ccac9198d1f5c9d215aca5af
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.860525998Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.86056512Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.864929484Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.864964824Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.869488468Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.869657726Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.869732049Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.87387012Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:07:02 no-preload-236664 crio[662]: time="2026-01-11T09:07:02.873901981Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.079355755Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b69e67ad-b359-4f49-9e4d-8daa22b16fca name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.080366623Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=52baeebd-1f79-43d0-b7a5-a98e8f28f016 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.081339986Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr/dashboard-metrics-scraper" id=751d1495-73c4-49c1-8b84-46e51f1b217f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.081447753Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.088824644Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.089360192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.106226673Z" level=info msg="Created container 84bf236250d57bfed04de7336a9941a59f5c8caf324655276e90564d8c0ffbf9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr/dashboard-metrics-scraper" id=751d1495-73c4-49c1-8b84-46e51f1b217f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.106917431Z" level=info msg="Starting container: 84bf236250d57bfed04de7336a9941a59f5c8caf324655276e90564d8c0ffbf9" id=76f7b439-6b14-4f97-8973-35059b9a45a0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.108588148Z" level=info msg="Started container" PID=1753 containerID=84bf236250d57bfed04de7336a9941a59f5c8caf324655276e90564d8c0ffbf9 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr/dashboard-metrics-scraper id=76f7b439-6b14-4f97-8973-35059b9a45a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b67f182aab2b6c812c49f71ba54cc2d60fb397ca9d7da7f2a774e834eab89423
	Jan 11 09:07:06 no-preload-236664 conmon[1751]: conmon 84bf236250d57bfed04d <ninfo>: container 1753 exited with status 1
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.320009652Z" level=info msg="Removing container: 3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780" id=42100e0f-d71f-4658-853d-49aeab12aa67 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.327846365Z" level=info msg="Error loading conmon cgroup of container 3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780: cgroup deleted" id=42100e0f-d71f-4658-853d-49aeab12aa67 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:07:06 no-preload-236664 crio[662]: time="2026-01-11T09:07:06.333339982Z" level=info msg="Removed container 3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr/dashboard-metrics-scraper" id=42100e0f-d71f-4658-853d-49aeab12aa67 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	84bf236250d57       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   b67f182aab2b6       dashboard-metrics-scraper-867fb5f87b-5wjzr   kubernetes-dashboard
	b066119e65c64       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago      Running             storage-provisioner         2                   51ce52059db1a       storage-provisioner                          kube-system
	5f9e5b6974dec       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago      Running             kubernetes-dashboard        0                   807bbdafb3bfa       kubernetes-dashboard-b84665fb8-s44cv         kubernetes-dashboard
	c9952368c5029       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   ddb76cb111d44       busybox                                      default
	6a2d81e48ccb6       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           53 seconds ago      Running             coredns                     1                   6d0c26a264b0b       coredns-7d764666f9-klbbk                     kube-system
	3ed4c1f24cb00       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago      Running             kindnet-cni                 1                   b1f22c4a19694       kindnet-qp4zr                                kube-system
	d42e646528fe4       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago      Exited              storage-provisioner         1                   51ce52059db1a       storage-provisioner                          kube-system
	34a556d5cd8cc       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           53 seconds ago      Running             kube-proxy                  1                   ffb92520b4618       kube-proxy-fzn6d                             kube-system
	330e32f7eadb9       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           58 seconds ago      Running             kube-apiserver              1                   1a4feb40647e5       kube-apiserver-no-preload-236664             kube-system
	7df07d0005202       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           58 seconds ago      Running             etcd                        1                   0c2c7bb8d8576       etcd-no-preload-236664                       kube-system
	db3b7cd2ab7a3       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           58 seconds ago      Running             kube-controller-manager     1                   56f6791700f1a       kube-controller-manager-no-preload-236664    kube-system
	2e5ccb5388ffb       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           58 seconds ago      Running             kube-scheduler              1                   a36d2fc0f7f5c       kube-scheduler-no-preload-236664             kube-system
	
	
	==> coredns [6a2d81e48ccb6d3fbc670096e077e9460cb9fdaebb6524dc50b18ca4f7bdc024] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53298 - 60322 "HINFO IN 8896566559478051865.5938670181658848141. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031339399s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-236664
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-236664
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=no-preload-236664
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_05_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:05:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-236664
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:07:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:07:02 +0000   Sun, 11 Jan 2026 09:05:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:07:02 +0000   Sun, 11 Jan 2026 09:05:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:07:02 +0000   Sun, 11 Jan 2026 09:05:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 09:07:02 +0000   Sun, 11 Jan 2026 09:05:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-236664
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                89f99f7b-845b-4e1b-9e20-91037b4226fe
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-klbbk                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-no-preload-236664                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         112s
	  kube-system                 kindnet-qp4zr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-236664              250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-236664     200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-fzn6d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-236664              100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-5wjzr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-s44cv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node no-preload-236664 event: Registered Node no-preload-236664 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node no-preload-236664 event: Registered Node no-preload-236664 in Controller
	
	
	==> dmesg <==
	[Jan11 08:32] overlayfs: idmapped layers are currently not supported
	[Jan11 08:35] overlayfs: idmapped layers are currently not supported
	[Jan11 08:36] overlayfs: idmapped layers are currently not supported
	[Jan11 08:37] overlayfs: idmapped layers are currently not supported
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	[Jan11 09:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7df07d00052022e60d6b9a41c00fa011c068566dbbd08a0a3c864f5b97024f9b] <==
	{"level":"info","ts":"2026-01-11T09:06:17.981820Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-11T09:06:17.981879Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T09:06:17.981944Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T09:06:18.015875Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:06:18.016147Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:06:18.063712Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T09:06:18.063759Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T09:06:18.118258Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T09:06:18.118304Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:06:18.118337Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:06:18.118348Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:06:18.118362Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T09:06:18.128175Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:06:18.128223Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:06:18.128244Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-11T09:06:18.128254Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:06:18.152955Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-236664 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:06:18.153001Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:06:18.153215Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:06:18.215088Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:06:18.217898Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-11T09:06:18.218047Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:06:18.218092Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T09:06:18.310221Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:06:18.323231Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:07:16 up  3:49,  0 user,  load average: 1.27, 1.38, 1.78
	Linux no-preload-236664 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3ed4c1f24cb00260799431425b62ddf25a672a12028fcd8996c2247b447e0b01] <==
	I0111 09:06:22.646924       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:06:22.647191       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 09:06:22.660448       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:06:22.660481       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:06:22.660500       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:06:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:06:22.853099       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:06:22.862257       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:06:22.939245       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:06:22.939396       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0111 09:06:52.853624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0111 09:06:52.938789       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0111 09:06:52.939774       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0111 09:06:52.939776       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0111 09:06:54.440235       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 09:06:54.440346       1 metrics.go:72] Registering metrics
	I0111 09:06:54.440617       1 controller.go:711] "Syncing nftables rules"
	I0111 09:07:02.853350       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:07:02.853410       1 main.go:301] handling current node
	I0111 09:07:12.858299       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:07:12.858332       1 main.go:301] handling current node
	
	
	==> kube-apiserver [330e32f7eadb9313968c7bb510089b7831588db3d8cf94a3fabbcbd17728ceb4] <==
	I0111 09:06:21.487691       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:21.491254       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 09:06:21.491778       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 09:06:21.497548       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 09:06:21.497669       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 09:06:21.497724       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 09:06:21.497758       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 09:06:21.497845       1 aggregator.go:187] initial CRD sync complete...
	I0111 09:06:21.497859       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 09:06:21.497865       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 09:06:21.497871       1 cache.go:39] Caches are synced for autoregister controller
	E0111 09:06:21.502918       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 09:06:21.547596       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:06:21.550946       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:06:22.115333       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 09:06:22.130548       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 09:06:22.180123       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 09:06:22.319945       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 09:06:22.423340       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:06:22.456837       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:06:22.711648       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.1.190"}
	I0111 09:06:22.727734       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.72.67"}
	I0111 09:06:24.978603       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:06:25.029699       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 09:06:25.287700       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [db3b7cd2ab7a3576a39c22e1ecfa88bcca60f27168a7647d118e735330714d86] <==
	I0111 09:06:24.617934       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-236664"
	I0111 09:06:24.617978       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0111 09:06:24.616171       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616127       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616134       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616142       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616148       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616155       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616160       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616166       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616177       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616226       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616182       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616188       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616194       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616200       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616205       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616211       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616216       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.616234       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.643060       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.713884       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.714962       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:24.714977       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 09:06:24.714983       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [34a556d5cd8cc4b1cc7da4d590e25b5f9036f3794393d4a77c3fd96b8e767c7d] <==
	I0111 09:06:22.915141       1 server_linux.go:53] "Using iptables proxy"
	I0111 09:06:23.012012       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:06:23.114031       1 shared_informer.go:377] "Caches are synced"
	I0111 09:06:23.114067       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0111 09:06:23.114153       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 09:06:23.133586       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:06:23.133643       1 server_linux.go:136] "Using iptables Proxier"
	I0111 09:06:23.137374       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 09:06:23.137718       1 server.go:529] "Version info" version="v1.35.0"
	I0111 09:06:23.137819       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:06:23.141375       1 config.go:106] "Starting endpoint slice config controller"
	I0111 09:06:23.141458       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 09:06:23.141769       1 config.go:200] "Starting service config controller"
	I0111 09:06:23.141816       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 09:06:23.142427       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 09:06:23.142480       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 09:06:23.142992       1 config.go:309] "Starting node config controller"
	I0111 09:06:23.143054       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 09:06:23.143084       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 09:06:23.241630       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 09:06:23.242847       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 09:06:23.242864       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2e5ccb5388ffb7117083cc27353adb4a2c137a7141f3cd18699f0c1f048c7e6a] <==
	I0111 09:06:20.049732       1 serving.go:386] Generated self-signed cert in-memory
	W0111 09:06:21.374970       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 09:06:21.375008       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 09:06:21.375018       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 09:06:21.375025       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 09:06:21.495054       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 09:06:21.495085       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:06:21.504773       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 09:06:21.504888       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 09:06:21.504900       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:06:21.504915       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 09:06:21.605797       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 09:06:36 no-preload-236664 kubelet[783]: E0111 09:06:36.233788     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5wjzr_kubernetes-dashboard(2c60c45b-eedf-4622-99ef-f99267c56bc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" podUID="2c60c45b-eedf-4622-99ef-f99267c56bc1"
	Jan 11 09:06:36 no-preload-236664 kubelet[783]: E0111 09:06:36.234341     783 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-236664" containerName="etcd"
	Jan 11 09:06:43 no-preload-236664 kubelet[783]: E0111 09:06:43.033308     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" containerName="dashboard-metrics-scraper"
	Jan 11 09:06:43 no-preload-236664 kubelet[783]: I0111 09:06:43.033359     783 scope.go:122] "RemoveContainer" containerID="18bd9ed562bc872a0050c6ce8f7560a0d32de2039ff3ebd11495fb45149c93ac"
	Jan 11 09:06:43 no-preload-236664 kubelet[783]: E0111 09:06:43.033536     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5wjzr_kubernetes-dashboard(2c60c45b-eedf-4622-99ef-f99267c56bc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" podUID="2c60c45b-eedf-4622-99ef-f99267c56bc1"
	Jan 11 09:06:45 no-preload-236664 kubelet[783]: E0111 09:06:45.079597     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" containerName="dashboard-metrics-scraper"
	Jan 11 09:06:45 no-preload-236664 kubelet[783]: I0111 09:06:45.080166     783 scope.go:122] "RemoveContainer" containerID="18bd9ed562bc872a0050c6ce8f7560a0d32de2039ff3ebd11495fb45149c93ac"
	Jan 11 09:06:45 no-preload-236664 kubelet[783]: I0111 09:06:45.259999     783 scope.go:122] "RemoveContainer" containerID="18bd9ed562bc872a0050c6ce8f7560a0d32de2039ff3ebd11495fb45149c93ac"
	Jan 11 09:06:45 no-preload-236664 kubelet[783]: E0111 09:06:45.260796     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" containerName="dashboard-metrics-scraper"
	Jan 11 09:06:45 no-preload-236664 kubelet[783]: I0111 09:06:45.260859     783 scope.go:122] "RemoveContainer" containerID="3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780"
	Jan 11 09:06:45 no-preload-236664 kubelet[783]: E0111 09:06:45.261286     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5wjzr_kubernetes-dashboard(2c60c45b-eedf-4622-99ef-f99267c56bc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" podUID="2c60c45b-eedf-4622-99ef-f99267c56bc1"
	Jan 11 09:06:53 no-preload-236664 kubelet[783]: E0111 09:06:53.032783     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" containerName="dashboard-metrics-scraper"
	Jan 11 09:06:53 no-preload-236664 kubelet[783]: I0111 09:06:53.032837     783 scope.go:122] "RemoveContainer" containerID="3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780"
	Jan 11 09:06:53 no-preload-236664 kubelet[783]: E0111 09:06:53.033369     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5wjzr_kubernetes-dashboard(2c60c45b-eedf-4622-99ef-f99267c56bc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" podUID="2c60c45b-eedf-4622-99ef-f99267c56bc1"
	Jan 11 09:06:53 no-preload-236664 kubelet[783]: I0111 09:06:53.281949     783 scope.go:122] "RemoveContainer" containerID="d42e646528fe412e4b2f31ce0b419736e4a9a98cedde1b525ef43c4b84bdd437"
	Jan 11 09:06:57 no-preload-236664 kubelet[783]: E0111 09:06:57.613135     783 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-klbbk" containerName="coredns"
	Jan 11 09:07:06 no-preload-236664 kubelet[783]: E0111 09:07:06.078757     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" containerName="dashboard-metrics-scraper"
	Jan 11 09:07:06 no-preload-236664 kubelet[783]: I0111 09:07:06.078802     783 scope.go:122] "RemoveContainer" containerID="3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780"
	Jan 11 09:07:06 no-preload-236664 kubelet[783]: I0111 09:07:06.317845     783 scope.go:122] "RemoveContainer" containerID="3bffc4bebe9f6db1d8c8fcd039471535ee8dbaf922acfd6feb3e016172814780"
	Jan 11 09:07:06 no-preload-236664 kubelet[783]: E0111 09:07:06.318171     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" containerName="dashboard-metrics-scraper"
	Jan 11 09:07:06 no-preload-236664 kubelet[783]: I0111 09:07:06.318199     783 scope.go:122] "RemoveContainer" containerID="84bf236250d57bfed04de7336a9941a59f5c8caf324655276e90564d8c0ffbf9"
	Jan 11 09:07:06 no-preload-236664 kubelet[783]: E0111 09:07:06.318344     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-5wjzr_kubernetes-dashboard(2c60c45b-eedf-4622-99ef-f99267c56bc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-5wjzr" podUID="2c60c45b-eedf-4622-99ef-f99267c56bc1"
	Jan 11 09:07:11 no-preload-236664 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 09:07:11 no-preload-236664 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 09:07:11 no-preload-236664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5f9e5b6974decd32e8f4aa12c584d870ee987483bb8f4fc519b1b323595fa69b] <==
	2026/01/11 09:06:30 Using namespace: kubernetes-dashboard
	2026/01/11 09:06:30 Using in-cluster config to connect to apiserver
	2026/01/11 09:06:30 Using secret token for csrf signing
	2026/01/11 09:06:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 09:06:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 09:06:30 Successful initial request to the apiserver, version: v1.35.0
	2026/01/11 09:06:30 Generating JWE encryption key
	2026/01/11 09:06:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 09:06:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 09:06:30 Initializing JWE encryption key from synchronized object
	2026/01/11 09:06:30 Creating in-cluster Sidecar client
	2026/01/11 09:06:30 Serving insecurely on HTTP port: 9090
	2026/01/11 09:06:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:07:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:06:30 Starting overwatch
	
	
	==> storage-provisioner [b066119e65c645014df48492eae023f983096f10e5eea8c1372800164bafb2e9] <==
	I0111 09:06:53.330586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 09:06:53.343178       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 09:06:53.343225       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 09:06:53.345320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:06:56.800171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:01.060953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:04.659586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:07.714097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:10.736251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:10.743906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:07:10.744068       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 09:07:10.744307       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-236664_8be3d89b-ebb4-4d41-915c-20315b4b3f3d!
	I0111 09:07:10.744783       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa57bb8e-53f1-4eea-8701-651adbacd6ef", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-236664_8be3d89b-ebb4-4d41-915c-20315b4b3f3d became leader
	W0111 09:07:10.752023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:10.770522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:07:10.844982       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-236664_8be3d89b-ebb4-4d41-915c-20315b4b3f3d!
	W0111 09:07:12.775460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:12.781690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:14.785700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:07:14.792407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d42e646528fe412e4b2f31ce0b419736e4a9a98cedde1b525ef43c4b84bdd437] <==
	I0111 09:06:22.669086       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 09:06:52.671538       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-236664 -n no-preload-236664
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-236664 -n no-preload-236664: exit status 2 (376.761217ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-236664 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-630626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-630626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (295.30107ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:08:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-630626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-630626 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-630626 describe deploy/metrics-server -n kube-system: exit status 1 (103.853513ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-630626 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-630626
helpers_test.go:244: (dbg) docker inspect embed-certs-630626:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b",
	        "Created": "2026-01-11T09:07:25.16144692Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 782157,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:07:25.238240451Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/hosts",
	        "LogPath": "/var/lib/docker/containers/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b-json.log",
	        "Name": "/embed-certs-630626",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-630626:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-630626",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b",
	                "LowerDir": "/var/lib/docker/overlay2/7fc45b1fcb57b15f0cc509ef006284c6ec8846193d1f6371d66840b980705ea4-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7fc45b1fcb57b15f0cc509ef006284c6ec8846193d1f6371d66840b980705ea4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7fc45b1fcb57b15f0cc509ef006284c6ec8846193d1f6371d66840b980705ea4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7fc45b1fcb57b15f0cc509ef006284c6ec8846193d1f6371d66840b980705ea4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-630626",
	                "Source": "/var/lib/docker/volumes/embed-certs-630626/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-630626",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-630626",
	                "name.minikube.sigs.k8s.io": "embed-certs-630626",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27241d0df76669db9920266dff008ca60f6ad3cd5d8f83abca17493d393be94f",
	            "SandboxKey": "/var/run/docker/netns/27241d0df766",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-630626": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:9f:10:10:3b:86",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45ad769942edefa5685d287911d0a8d87021dd76ee2918e11cae91d80793b700",
	                    "EndpointID": "f95884f45b84f6084d9419f6b06089835a96faca905e21943aa7bc7977a6c307",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-630626",
	                        "25c377e6342a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
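The inspect dump above ends with the container's published ports (22/tcp bound to 127.0.0.1:33803, 8443/tcp to 127.0.0.1:33806, and so on). As a minimal editorial sketch, not part of the test run and assuming only the fields visible in that output, this is how a `docker container inspect` JSON dump could be decoded in Go to recover the SSH host port:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// portBinding mirrors one entry under NetworkSettings.Ports in the
// inspect output shown above.
type portBinding struct {
	HostIp   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

// inspectEntry keeps only the fields this sketch needs.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// `docker container inspect <name>` prints a JSON array; read it from stdin.
	var entries []inspectEntry
	if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		// For the embed-certs-630626 container above this prints 127.0.0.1:33803.
		for _, b := range e.NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
		}
	}
}
```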
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-630626 -n embed-certs-630626
E0111 09:08:18.028856  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:08:18.034151  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:08:18.044425  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:08:18.064715  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:08:18.105015  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:08:18.187870  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-630626 logs -n 25
E0111 09:08:18.348489  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:08:18.669235  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:08:19.310176  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-630626 logs -n 25: (1.380182471s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-options-459267                                                                                                                                                                                                                        │ cert-options-459267          │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:02 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:02 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-931581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │                     │
	│ stop    │ -p old-k8s-version-931581 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-931581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:04 UTC │
	│ image   │ old-k8s-version-931581 image list --format=json                                                                                                                                                                                               │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ pause   │ -p old-k8s-version-931581 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │                     │
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                                                                                     │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                                                                                     │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-236664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │                     │
	│ stop    │ -p no-preload-236664 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │ 11 Jan 26 09:06 UTC │
	│ addons  │ enable dashboard -p no-preload-236664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ image   │ no-preload-236664 image list --format=json                                                                                                                                                                                                    │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ pause   │ -p no-preload-236664 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │                     │
	│ delete  │ -p no-preload-236664                                                                                                                                                                                                                          │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ delete  │ -p no-preload-236664                                                                                                                                                                                                                          │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:08 UTC │
	│ ssh     │ force-systemd-flag-630015 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p force-systemd-flag-630015                                                                                                                                                                                                                  │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p disable-driver-mounts-781777                                                                                                                                                                                                               │ disable-driver-mounts-781777 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-630626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:08:08
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:08:08.515468  785363 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:08:08.515605  785363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:08:08.515617  785363 out.go:374] Setting ErrFile to fd 2...
	I0111 09:08:08.515623  785363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:08:08.515894  785363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:08:08.516309  785363 out.go:368] Setting JSON to false
	I0111 09:08:08.517216  785363 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13838,"bootTime":1768108650,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:08:08.517294  785363 start.go:143] virtualization:  
	I0111 09:08:08.523300  785363 out.go:179] * [default-k8s-diff-port-588333] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:08:08.526541  785363 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:08:08.526689  785363 notify.go:221] Checking for updates...
	I0111 09:08:08.532847  785363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:08:08.537038  785363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:08:08.541123  785363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:08:08.544593  785363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:08:08.548465  785363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:08:08.551839  785363 config.go:182] Loaded profile config "embed-certs-630626": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:08:08.551999  785363 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:08:08.595883  785363 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:08:08.596068  785363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:08:08.666331  785363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:08:08.652322916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:08:08.666450  785363 docker.go:319] overlay module found
	I0111 09:08:08.670224  785363 out.go:179] * Using the docker driver based on user configuration
	I0111 09:08:08.673183  785363 start.go:309] selected driver: docker
	I0111 09:08:08.673206  785363 start.go:928] validating driver "docker" against <nil>
	I0111 09:08:08.673221  785363 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:08:08.673994  785363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:08:08.738065  785363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:08:08.725680545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:08:08.738248  785363 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 09:08:08.738471  785363 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:08:08.741473  785363 out.go:179] * Using Docker driver with root privileges
	I0111 09:08:08.744281  785363 cni.go:84] Creating CNI manager for ""
	I0111 09:08:08.744352  785363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:08:08.744372  785363 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 09:08:08.744456  785363 start.go:353] cluster config:
	{Name:default-k8s-diff-port-588333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-588333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:08:08.749386  785363 out.go:179] * Starting "default-k8s-diff-port-588333" primary control-plane node in "default-k8s-diff-port-588333" cluster
	I0111 09:08:08.752162  785363 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:08:08.755039  785363 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:08:08.758008  785363 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:08:08.758070  785363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 09:08:08.758084  785363 cache.go:65] Caching tarball of preloaded images
	I0111 09:08:08.758093  785363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:08:08.758230  785363 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:08:08.758242  785363 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 09:08:08.758350  785363 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/config.json ...
	I0111 09:08:08.758379  785363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/config.json: {Name:mk1856652bf1fc00ed33e30e451b49560c54e441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:08:08.781930  785363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:08:08.781955  785363 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:08:08.781970  785363 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:08:08.782005  785363 start.go:360] acquireMachinesLock for default-k8s-diff-port-588333: {Name:mk6f824bc7ba249281d1a4e0d65911b4e29ac8d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:08:08.782113  785363 start.go:364] duration metric: took 87.46µs to acquireMachinesLock for "default-k8s-diff-port-588333"
	I0111 09:08:08.782221  785363 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-588333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-588333 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:08:08.782335  785363 start.go:125] createHost starting for "" (driver="docker")
	I0111 09:08:08.785650  785363 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 09:08:08.785891  785363 start.go:159] libmachine.API.Create for "default-k8s-diff-port-588333" (driver="docker")
	I0111 09:08:08.785930  785363 client.go:173] LocalClient.Create starting
	I0111 09:08:08.786026  785363 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem
	I0111 09:08:08.786086  785363 main.go:144] libmachine: Decoding PEM data...
	I0111 09:08:08.786107  785363 main.go:144] libmachine: Parsing certificate...
	I0111 09:08:08.786223  785363 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem
	I0111 09:08:08.786261  785363 main.go:144] libmachine: Decoding PEM data...
	I0111 09:08:08.786275  785363 main.go:144] libmachine: Parsing certificate...
	I0111 09:08:08.786656  785363 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-588333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 09:08:08.803628  785363 cli_runner.go:211] docker network inspect default-k8s-diff-port-588333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 09:08:08.803725  785363 network_create.go:284] running [docker network inspect default-k8s-diff-port-588333] to gather additional debugging logs...
	I0111 09:08:08.803750  785363 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-588333
	W0111 09:08:08.820406  785363 cli_runner.go:211] docker network inspect default-k8s-diff-port-588333 returned with exit code 1
	I0111 09:08:08.820433  785363 network_create.go:287] error running [docker network inspect default-k8s-diff-port-588333]: docker network inspect default-k8s-diff-port-588333: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-588333 not found
	I0111 09:08:08.820445  785363 network_create.go:289] output of [docker network inspect default-k8s-diff-port-588333]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-588333 not found
	
	** /stderr **
	I0111 09:08:08.820541  785363 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:08:08.838390  785363 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-113e3e286bbe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:2e:86:95:08:19} reservation:<nil>}
	I0111 09:08:08.838783  785363 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461c1a9d970d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:7e:25:fe:d0:0d} reservation:<nil>}
	I0111 09:08:08.839134  785363 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a38e10816f85 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:42:af:ae:32:ae} reservation:<nil>}
	I0111 09:08:08.839560  785363 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a6e120}
	I0111 09:08:08.839597  785363 network_create.go:124] attempt to create docker network default-k8s-diff-port-588333 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0111 09:08:08.839655  785363 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-588333 default-k8s-diff-port-588333
	I0111 09:08:08.910603  785363 network_create.go:108] docker network default-k8s-diff-port-588333 192.168.76.0/24 created
	I0111 09:08:08.910637  785363 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-588333" container
	I0111 09:08:08.910739  785363 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 09:08:08.927524  785363 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-588333 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-588333 --label created_by.minikube.sigs.k8s.io=true
	I0111 09:08:08.955913  785363 oci.go:103] Successfully created a docker volume default-k8s-diff-port-588333
	I0111 09:08:08.955998  785363 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-588333-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-588333 --entrypoint /usr/bin/test -v default-k8s-diff-port-588333:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 09:08:09.518578  785363 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-588333
	I0111 09:08:09.518647  785363 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:08:09.518662  785363 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 09:08:09.518754  785363 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-588333:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Jan 11 09:08:05 embed-certs-630626 crio[834]: time="2026-01-11T09:08:05.627042902Z" level=info msg="Created container 76c609490b90b9109e5d22dd6ea183c085d540fc14d27b7e3b7501205e113eea: kube-system/coredns-7d764666f9-x5tzj/coredns" id=d05b97c3-4c2b-4b7b-a858-b1258bf31b8f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:08:05 embed-certs-630626 crio[834]: time="2026-01-11T09:08:05.627973047Z" level=info msg="Starting container: 76c609490b90b9109e5d22dd6ea183c085d540fc14d27b7e3b7501205e113eea" id=ca912f3a-f735-49e0-8917-817a9fe9b4d0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:08:05 embed-certs-630626 crio[834]: time="2026-01-11T09:08:05.629662013Z" level=info msg="Started container" PID=1772 containerID=76c609490b90b9109e5d22dd6ea183c085d540fc14d27b7e3b7501205e113eea description=kube-system/coredns-7d764666f9-x5tzj/coredns id=ca912f3a-f735-49e0-8917-817a9fe9b4d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e4ae1d87e8ee5c3794024103f44dcb162d1ca6d9840e9fd138b95e48be938721
	Jan 11 09:08:08 embed-certs-630626 crio[834]: time="2026-01-11T09:08:08.535473881Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8788d529-6cb2-4f54-8453-b0237acb2bb3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:08:08 embed-certs-630626 crio[834]: time="2026-01-11T09:08:08.535566313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:08:08 embed-certs-630626 crio[834]: time="2026-01-11T09:08:08.55088413Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:31ce8c14172c816b7acceb26c85b698c08552bec86261a422a9ac3c65f1cf4f6 UID:0d555ec4-fa89-4024-98df-7787a1b7c069 NetNS:/var/run/netns/a92d581e-3704-4f89-a7fa-1147c40b7068 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400167a0c8}] Aliases:map[]}"
	Jan 11 09:08:08 embed-certs-630626 crio[834]: time="2026-01-11T09:08:08.550928438Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 11 09:08:08 embed-certs-630626 crio[834]: time="2026-01-11T09:08:08.579124608Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:31ce8c14172c816b7acceb26c85b698c08552bec86261a422a9ac3c65f1cf4f6 UID:0d555ec4-fa89-4024-98df-7787a1b7c069 NetNS:/var/run/netns/a92d581e-3704-4f89-a7fa-1147c40b7068 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400167a0c8}] Aliases:map[]}"
	Jan 11 09:08:08 embed-certs-630626 crio[834]: time="2026-01-11T09:08:08.579458225Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 11 09:08:08 embed-certs-630626 crio[834]: time="2026-01-11T09:08:08.587541841Z" level=info msg="Ran pod sandbox 31ce8c14172c816b7acceb26c85b698c08552bec86261a422a9ac3c65f1cf4f6 with infra container: default/busybox/POD" id=8788d529-6cb2-4f54-8453-b0237acb2bb3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:08:08 embed-certs-630626 crio[834]: time="2026-01-11T09:08:08.589048814Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a74495f9-80b3-4d67-8b86-23523ba2cd29 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:08:08 embed-certs-630626 crio[834]: time="2026-01-11T09:08:08.589326036Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a74495f9-80b3-4d67-8b86-23523ba2cd29 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:08:08 embed-certs-630626 crio[834]: time="2026-01-11T09:08:08.589513706Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a74495f9-80b3-4d67-8b86-23523ba2cd29 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:08:08 embed-certs-630626 crio[834]: time="2026-01-11T09:08:08.590374091Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=76eace82-c2d5-45f5-98a5-f396d8f1d22f name=/runtime.v1.ImageService/PullImage
	Jan 11 09:08:08 embed-certs-630626 crio[834]: time="2026-01-11T09:08:08.593142865Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 11 09:08:10 embed-certs-630626 crio[834]: time="2026-01-11T09:08:10.760941134Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=76eace82-c2d5-45f5-98a5-f396d8f1d22f name=/runtime.v1.ImageService/PullImage
	Jan 11 09:08:10 embed-certs-630626 crio[834]: time="2026-01-11T09:08:10.761983387Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d0e1a799-5a82-443d-9a8f-31cb49c14784 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:08:10 embed-certs-630626 crio[834]: time="2026-01-11T09:08:10.763823542Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d388c335-b683-420d-97ac-1e5b99a9050c name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:08:10 embed-certs-630626 crio[834]: time="2026-01-11T09:08:10.769327711Z" level=info msg="Creating container: default/busybox/busybox" id=e484e835-e33e-490b-a567-171e36fa54d2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:08:10 embed-certs-630626 crio[834]: time="2026-01-11T09:08:10.769444422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:08:10 embed-certs-630626 crio[834]: time="2026-01-11T09:08:10.7780952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:08:10 embed-certs-630626 crio[834]: time="2026-01-11T09:08:10.77864084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:08:10 embed-certs-630626 crio[834]: time="2026-01-11T09:08:10.795874801Z" level=info msg="Created container 809df00e3e62f52755f5912ec922f9b7d229d9f9ab7e1554f0b84aa81b7702dd: default/busybox/busybox" id=e484e835-e33e-490b-a567-171e36fa54d2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:08:10 embed-certs-630626 crio[834]: time="2026-01-11T09:08:10.79799025Z" level=info msg="Starting container: 809df00e3e62f52755f5912ec922f9b7d229d9f9ab7e1554f0b84aa81b7702dd" id=7ecd0901-92d7-4802-a501-93b730ee6213 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:08:10 embed-certs-630626 crio[834]: time="2026-01-11T09:08:10.801037838Z" level=info msg="Started container" PID=1831 containerID=809df00e3e62f52755f5912ec922f9b7d229d9f9ab7e1554f0b84aa81b7702dd description=default/busybox/busybox id=7ecd0901-92d7-4802-a501-93b730ee6213 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31ce8c14172c816b7acceb26c85b698c08552bec86261a422a9ac3c65f1cf4f6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	809df00e3e62f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   31ce8c14172c8       busybox                                      default
	76c609490b90b       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      13 seconds ago      Running             coredns                   0                   e4ae1d87e8ee5       coredns-7d764666f9-x5tzj                     kube-system
	5d70bf73247f2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   b2c17a1a49470       storage-provisioner                          kube-system
	c2b6d3b4d6243       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   00d54f3c46bbf       kindnet-w5nb5                                kube-system
	68027af381a8b       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      26 seconds ago      Running             kube-proxy                0                   6db351144bc63       kube-proxy-7xnsq                             kube-system
	973891c766c50       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      37 seconds ago      Running             kube-controller-manager   0                   6163275771fcb       kube-controller-manager-embed-certs-630626   kube-system
	df724791d12de       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      37 seconds ago      Running             kube-apiserver            0                   4ac44db6a4a60       kube-apiserver-embed-certs-630626            kube-system
	789f3897a97bb       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      37 seconds ago      Running             kube-scheduler            0                   97c82a836f9d7       kube-scheduler-embed-certs-630626            kube-system
	cd3344e03e8d9       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      37 seconds ago      Running             etcd                      0                   b1ccf25f3006d       etcd-embed-certs-630626                      kube-system
	
	
	==> coredns [76c609490b90b9109e5d22dd6ea183c085d540fc14d27b7e3b7501205e113eea] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38445 - 18182 "HINFO IN 6814936842978967872.8449521692851171126. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024298889s
	
	
	==> describe nodes <==
	Name:               embed-certs-630626
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-630626
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=embed-certs-630626
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_07_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:07:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-630626
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:08:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:08:17 +0000   Sun, 11 Jan 2026 09:07:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:08:17 +0000   Sun, 11 Jan 2026 09:07:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:08:17 +0000   Sun, 11 Jan 2026 09:07:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 09:08:17 +0000   Sun, 11 Jan 2026 09:08:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-630626
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                c5657d65-a5db-44ef-92ca-1ef6faf268e8
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-x5tzj                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-embed-certs-630626                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-w5nb5                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-630626             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-630626    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-7xnsq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-630626             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node embed-certs-630626 event: Registered Node embed-certs-630626 in Controller
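The Allocated resources block above reports 850m of CPU requests as 42% of this 2-CPU node; the percentages are simply the milli-unit request over node capacity, truncated to an integer. A tiny illustrative calculation (the helper name is made up, not a kubectl API):

```go
package main

import "fmt"

// requestPercent returns the integer percentage of node capacity that a
// milli-CPU request consumes, e.g. 850m on a 2-CPU (2000m) node -> 42.
func requestPercent(requestMilli, capacityCores int64) int64 {
	return requestMilli * 100 / (capacityCores * 1000)
}

func main() {
	fmt.Println(requestPercent(850, 2)) // 42, the node total above
	fmt.Println(requestPercent(250, 2)) // 12, kube-apiserver's 250m
	fmt.Println(requestPercent(100, 2)) // 5, kube-scheduler's 100m
}
```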
	
	
	==> dmesg <==
	[Jan11 08:35] overlayfs: idmapped layers are currently not supported
	[Jan11 08:36] overlayfs: idmapped layers are currently not supported
	[Jan11 08:37] overlayfs: idmapped layers are currently not supported
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	[Jan11 09:06] overlayfs: idmapped layers are currently not supported
	[Jan11 09:07] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cd3344e03e8d90fba4bbead113193c632dbca335a4907ac3022d7e47b89abd5d] <==
	{"level":"info","ts":"2026-01-11T09:07:41.226911Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:07:41.969823Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-11T09:07:41.969874Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-11T09:07:41.969924Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2026-01-11T09:07:41.969955Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:07:41.969974Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:07:41.974196Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:07:41.974244Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:07:41.974284Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2026-01-11T09:07:41.974300Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:07:41.980036Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:07:41.986385Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-630626 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:07:41.986430Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:07:41.987635Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:07:41.987783Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:07:41.987920Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:07:41.987975Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:07:41.988019Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T09:07:41.988088Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-11T09:07:41.988129Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:07:41.988143Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T09:07:41.989596Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:07:41.991914Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T09:07:41.993582Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:07:41.994585Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 09:08:19 up  3:50,  0 user,  load average: 2.04, 1.58, 1.82
	Linux embed-certs-630626 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c2b6d3b4d6243579442f0f2340d555831a6164411cd52aefcfb50e0722efb236] <==
	I0111 09:07:54.743571       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:07:54.743895       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 09:07:54.744028       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:07:54.744047       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:07:54.744062       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:07:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:07:54.946527       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:07:54.946607       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:07:54.946642       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:07:54.947596       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 09:07:55.238224       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 09:07:55.238348       1 metrics.go:72] Registering metrics
	I0111 09:07:55.238484       1 controller.go:711] "Syncing nftables rules"
	I0111 09:08:04.953050       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:08:04.953196       1 main.go:301] handling current node
	I0111 09:08:14.948925       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:08:14.948963       1 main.go:301] handling current node
	
	
	==> kube-apiserver [df724791d12deb80c23fb837cb0b530da9681ad3791dff8b4ce7178ad7a7459c] <==
	E0111 09:07:44.106910       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	E0111 09:07:44.114523       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0111 09:07:44.135655       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0111 09:07:44.135895       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:07:44.140203       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:07:44.151827       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 09:07:44.321152       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:07:44.840957       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0111 09:07:44.847083       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0111 09:07:44.847112       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 09:07:45.704625       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:07:45.762231       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:07:45.866350       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0111 09:07:45.873816       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0111 09:07:45.875055       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 09:07:45.879680       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:07:45.956151       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 09:07:46.770273       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 09:07:46.788912       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0111 09:07:46.805220       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0111 09:07:51.760532       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 09:07:51.811004       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:07:51.816138       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:07:51.961114       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0111 09:08:17.341600       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:35166: use of closed network connection
	
	
	==> kube-controller-manager [973891c766c50c336bd141c53d67fa7ea0586c1e8c410780849756f3d9a63449] <==
	I0111 09:07:50.763175       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.763183       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.763233       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.763239       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.763836       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.763846       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.779820       1 range_allocator.go:177] "Sending events to api server"
	I0111 09:07:50.779870       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0111 09:07:50.779878       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:07:50.779884       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.763853       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.763010       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.763018       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.763029       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.778023       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.778734       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.785647       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:07:50.778745       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.804318       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.804914       1 range_allocator.go:433] "Set node PodCIDR" node="embed-certs-630626" podCIDRs=["10.244.0.0/24"]
	I0111 09:07:50.863972       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:50.863992       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 09:07:50.863998       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 09:07:50.886780       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:05.768154       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [68027af381a8b972d20030e7f21dbb835d6c60492a0cca9093e40ea865e5051f] <==
	I0111 09:07:52.590967       1 server_linux.go:53] "Using iptables proxy"
	I0111 09:07:52.671552       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:07:52.772323       1 shared_informer.go:377] "Caches are synced"
	I0111 09:07:52.772364       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0111 09:07:52.772439       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 09:07:52.838749       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:07:52.838805       1 server_linux.go:136] "Using iptables Proxier"
	I0111 09:07:52.848630       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 09:07:52.849085       1 server.go:529] "Version info" version="v1.35.0"
	I0111 09:07:52.849112       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:07:52.851557       1 config.go:200] "Starting service config controller"
	I0111 09:07:52.851576       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 09:07:52.852519       1 config.go:106] "Starting endpoint slice config controller"
	I0111 09:07:52.852532       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 09:07:52.852554       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 09:07:52.852558       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 09:07:52.853261       1 config.go:309] "Starting node config controller"
	I0111 09:07:52.853269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 09:07:52.853275       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 09:07:52.952728       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 09:07:52.952757       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 09:07:52.952809       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [789f3897a97bb7cbb8656755bb21c7445a34549d286bbb65f0c8521ecb1e8be3] <==
	E0111 09:07:44.055445       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 09:07:44.055483       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 09:07:44.055607       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 09:07:44.055656       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 09:07:44.055744       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 09:07:44.055779       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 09:07:44.055808       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 09:07:44.055844       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 09:07:44.875585       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 09:07:44.895277       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 09:07:44.932219       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 09:07:44.934948       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 09:07:44.935459       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 09:07:44.945683       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 09:07:44.976973       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 09:07:45.001773       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 09:07:45.087276       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0111 09:07:45.129568       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 09:07:45.236194       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0111 09:07:45.349055       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 09:07:45.364832       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 09:07:45.377421       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 09:07:45.432606       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0111 09:07:45.433959       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I0111 09:07:48.030292       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 09:07:52 embed-certs-630626 kubelet[1296]: I0111 09:07:52.067915    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phtdf\" (UniqueName: \"kubernetes.io/projected/870f58a8-0905-4b5e-aa6a-b6c96819a399-kube-api-access-phtdf\") pod \"kube-proxy-7xnsq\" (UID: \"870f58a8-0905-4b5e-aa6a-b6c96819a399\") " pod="kube-system/kube-proxy-7xnsq"
	Jan 11 09:07:52 embed-certs-630626 kubelet[1296]: E0111 09:07:52.171667    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-630626" containerName="etcd"
	Jan 11 09:07:52 embed-certs-630626 kubelet[1296]: I0111 09:07:52.235194    1296 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 11 09:07:52 embed-certs-630626 kubelet[1296]: W0111 09:07:52.336867    1296 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/crio-6db351144bc639f3076d95fd2a4f5649338a51bd3ae4165637b85046892e4c29 WatchSource:0}: Error finding container 6db351144bc639f3076d95fd2a4f5649338a51bd3ae4165637b85046892e4c29: Status 404 returned error can't find the container with id 6db351144bc639f3076d95fd2a4f5649338a51bd3ae4165637b85046892e4c29
	Jan 11 09:07:53 embed-certs-630626 kubelet[1296]: E0111 09:07:53.671403    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-630626" containerName="kube-apiserver"
	Jan 11 09:07:53 embed-certs-630626 kubelet[1296]: I0111 09:07:53.689914    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-7xnsq" podStartSLOduration=2.689897023 podStartE2EDuration="2.689897023s" podCreationTimestamp="2026-01-11 09:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:07:52.920551327 +0000 UTC m=+6.311758580" watchObservedRunningTime="2026-01-11 09:07:53.689897023 +0000 UTC m=+7.081104235"
	Jan 11 09:07:54 embed-certs-630626 kubelet[1296]: E0111 09:07:54.082328    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-630626" containerName="kube-scheduler"
	Jan 11 09:07:56 embed-certs-630626 kubelet[1296]: E0111 09:07:56.006351    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-630626" containerName="kube-controller-manager"
	Jan 11 09:07:56 embed-certs-630626 kubelet[1296]: I0111 09:07:56.022670    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-w5nb5" podStartSLOduration=2.788928229 podStartE2EDuration="5.022641659s" podCreationTimestamp="2026-01-11 09:07:51 +0000 UTC" firstStartedPulling="2026-01-11 09:07:52.406977588 +0000 UTC m=+5.798184792" lastFinishedPulling="2026-01-11 09:07:54.64069101 +0000 UTC m=+8.031898222" observedRunningTime="2026-01-11 09:07:54.917413209 +0000 UTC m=+8.308620413" watchObservedRunningTime="2026-01-11 09:07:56.022641659 +0000 UTC m=+9.413848863"
	Jan 11 09:08:02 embed-certs-630626 kubelet[1296]: E0111 09:08:02.172624    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-630626" containerName="etcd"
	Jan 11 09:08:03 embed-certs-630626 kubelet[1296]: E0111 09:08:03.682099    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-630626" containerName="kube-apiserver"
	Jan 11 09:08:04 embed-certs-630626 kubelet[1296]: E0111 09:08:04.091231    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-630626" containerName="kube-scheduler"
	Jan 11 09:08:05 embed-certs-630626 kubelet[1296]: I0111 09:08:05.122812    1296 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 11 09:08:05 embed-certs-630626 kubelet[1296]: I0111 09:08:05.282364    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gz7z\" (UniqueName: \"kubernetes.io/projected/d5ac437b-5119-4b10-9625-81ff88fee999-kube-api-access-5gz7z\") pod \"storage-provisioner\" (UID: \"d5ac437b-5119-4b10-9625-81ff88fee999\") " pod="kube-system/storage-provisioner"
	Jan 11 09:08:05 embed-certs-630626 kubelet[1296]: I0111 09:08:05.282425    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6885d7ae-0fc5-41e2-b94f-baff32115d85-config-volume\") pod \"coredns-7d764666f9-x5tzj\" (UID: \"6885d7ae-0fc5-41e2-b94f-baff32115d85\") " pod="kube-system/coredns-7d764666f9-x5tzj"
	Jan 11 09:08:05 embed-certs-630626 kubelet[1296]: I0111 09:08:05.282447    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rngc6\" (UniqueName: \"kubernetes.io/projected/6885d7ae-0fc5-41e2-b94f-baff32115d85-kube-api-access-rngc6\") pod \"coredns-7d764666f9-x5tzj\" (UID: \"6885d7ae-0fc5-41e2-b94f-baff32115d85\") " pod="kube-system/coredns-7d764666f9-x5tzj"
	Jan 11 09:08:05 embed-certs-630626 kubelet[1296]: I0111 09:08:05.282473    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d5ac437b-5119-4b10-9625-81ff88fee999-tmp\") pod \"storage-provisioner\" (UID: \"d5ac437b-5119-4b10-9625-81ff88fee999\") " pod="kube-system/storage-provisioner"
	Jan 11 09:08:05 embed-certs-630626 kubelet[1296]: E0111 09:08:05.930608    1296 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-x5tzj" containerName="coredns"
	Jan 11 09:08:05 embed-certs-630626 kubelet[1296]: I0111 09:08:05.981244    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-x5tzj" podStartSLOduration=13.981227233 podStartE2EDuration="13.981227233s" podCreationTimestamp="2026-01-11 09:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:08:05.955823091 +0000 UTC m=+19.347030295" watchObservedRunningTime="2026-01-11 09:08:05.981227233 +0000 UTC m=+19.372434437"
	Jan 11 09:08:06 embed-certs-630626 kubelet[1296]: I0111 09:08:06.001708    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.001689224 podStartE2EDuration="13.001689224s" podCreationTimestamp="2026-01-11 09:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:08:05.981733283 +0000 UTC m=+19.372940495" watchObservedRunningTime="2026-01-11 09:08:06.001689224 +0000 UTC m=+19.392896436"
	Jan 11 09:08:06 embed-certs-630626 kubelet[1296]: E0111 09:08:06.032748    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-630626" containerName="kube-controller-manager"
	Jan 11 09:08:06 embed-certs-630626 kubelet[1296]: E0111 09:08:06.935451    1296 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-x5tzj" containerName="coredns"
	Jan 11 09:08:07 embed-certs-630626 kubelet[1296]: E0111 09:08:07.949087    1296 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-x5tzj" containerName="coredns"
	Jan 11 09:08:08 embed-certs-630626 kubelet[1296]: I0111 09:08:08.310634    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t8pp\" (UniqueName: \"kubernetes.io/projected/0d555ec4-fa89-4024-98df-7787a1b7c069-kube-api-access-7t8pp\") pod \"busybox\" (UID: \"0d555ec4-fa89-4024-98df-7787a1b7c069\") " pod="default/busybox"
	Jan 11 09:08:08 embed-certs-630626 kubelet[1296]: W0111 09:08:08.585880    1296 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/crio-31ce8c14172c816b7acceb26c85b698c08552bec86261a422a9ac3c65f1cf4f6 WatchSource:0}: Error finding container 31ce8c14172c816b7acceb26c85b698c08552bec86261a422a9ac3c65f1cf4f6: Status 404 returned error can't find the container with id 31ce8c14172c816b7acceb26c85b698c08552bec86261a422a9ac3c65f1cf4f6
	
	
	==> storage-provisioner [5d70bf73247f2cc742018feb5ee3719149ca9de8e7943f087b97683f1ddcab5c] <==
	I0111 09:08:05.590367       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 09:08:05.625961       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 09:08:05.626022       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 09:08:05.633386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:05.654309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:08:05.654591       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 09:08:05.659168       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-630626_f8514a55-b2b8-426c-b330-2536d19dd0a7!
	I0111 09:08:05.667384       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e8d4ca1-b478-4fe9-ac57-5e4f0fb583ee", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-630626_f8514a55-b2b8-426c-b330-2536d19dd0a7 became leader
	W0111 09:08:05.669500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:05.683602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:08:05.762249       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-630626_f8514a55-b2b8-426c-b330-2536d19dd0a7!
	W0111 09:08:07.687396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:07.693148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:09.696367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:09.701409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:11.705906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:11.725761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:13.728739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:13.735054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:15.738390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:15.745464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:17.748134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:17.760688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-630626 -n embed-certs-630626
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-630626 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-588333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-588333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (287.062003ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:09:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-588333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-588333 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-588333 describe deploy/metrics-server -n kube-system: exit status 1 (87.068237ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-588333 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-588333
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-588333:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f",
	        "Created": "2026-01-11T09:08:13.612670128Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 785804,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:08:13.669819606Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/hosts",
	        "LogPath": "/var/lib/docker/containers/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f-json.log",
	        "Name": "/default-k8s-diff-port-588333",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-588333:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-588333",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f",
	                "LowerDir": "/var/lib/docker/overlay2/5ed5c49c670be7eacdb8eab8b674e3763ca92e5df45679f0d330c538754b227a-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ed5c49c670be7eacdb8eab8b674e3763ca92e5df45679f0d330c538754b227a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ed5c49c670be7eacdb8eab8b674e3763ca92e5df45679f0d330c538754b227a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ed5c49c670be7eacdb8eab8b674e3763ca92e5df45679f0d330c538754b227a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-588333",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-588333/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-588333",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-588333",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-588333",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c0c1c34c80b7dc9374022677e7626b18f34b7057645f4dbfd075031640b8d083",
	            "SandboxKey": "/var/run/docker/netns/c0c1c34c80b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33810"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-588333": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:7f:62:be:b2:4e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa19db219143297e6d2133400cad3ab3e7355f9d99472fad6a65d0a14f403a70",
	                    "EndpointID": "41dc8c06d46763402e90ebd0acd202ebfe8a09fb253e93a5afb1348dd8a156aa",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-588333",
	                        "ed1214141656"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-588333 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-588333 logs -n 25: (1.288782703s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-931581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:03 UTC │
	│ start   │ -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:03 UTC │ 11 Jan 26 09:04 UTC │
	│ image   │ old-k8s-version-931581 image list --format=json                                                                                                                                                                                               │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ pause   │ -p old-k8s-version-931581 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │                     │
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                                                                                     │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                                                                                     │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-236664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │                     │
	│ stop    │ -p no-preload-236664 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │ 11 Jan 26 09:06 UTC │
	│ addons  │ enable dashboard -p no-preload-236664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ image   │ no-preload-236664 image list --format=json                                                                                                                                                                                                    │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ pause   │ -p no-preload-236664 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │                     │
	│ delete  │ -p no-preload-236664                                                                                                                                                                                                                          │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ delete  │ -p no-preload-236664                                                                                                                                                                                                                          │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:08 UTC │
	│ ssh     │ force-systemd-flag-630015 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p force-systemd-flag-630015                                                                                                                                                                                                                  │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p disable-driver-mounts-781777                                                                                                                                                                                                               │ disable-driver-mounts-781777 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-630626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │                     │
	│ stop    │ -p embed-certs-630626 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-630626 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-588333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
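A row with an empty End column above (for example the metrics-server enables against no-preload-236664, embed-certs-630626 and default-k8s-diff-port-588333) never recorded a completion time: the command was either still running or did not finish cleanly when this report was captured. Any such row can be replayed by hand with the same binary and flags; a hypothetical re-run of the last of those rows, with the flags copied verbatim from the table and the binary path taken from MINIKUBE_BIN below, would be:

	out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-588333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain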
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:08:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:08:32.716404  788146 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:08:32.716515  788146 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:08:32.716558  788146 out.go:374] Setting ErrFile to fd 2...
	I0111 09:08:32.716564  788146 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:08:32.716837  788146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:08:32.717228  788146 out.go:368] Setting JSON to false
	I0111 09:08:32.718176  788146 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13863,"bootTime":1768108650,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:08:32.718246  788146 start.go:143] virtualization:  
	I0111 09:08:32.721766  788146 out.go:179] * [embed-certs-630626] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:08:32.724720  788146 notify.go:221] Checking for updates...
	I0111 09:08:32.724684  788146 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:08:32.728489  788146 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:08:32.731307  788146 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:08:32.734197  788146 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:08:32.736996  788146 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:08:32.739942  788146 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:08:32.743404  788146 config.go:182] Loaded profile config "embed-certs-630626": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:08:32.744039  788146 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:08:32.795828  788146 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:08:32.795935  788146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:08:32.877610  788146 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 09:08:32.867016307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:08:32.877722  788146 docker.go:319] overlay module found
	I0111 09:08:32.881353  788146 out.go:179] * Using the docker driver based on existing profile
	I0111 09:08:32.884190  788146 start.go:309] selected driver: docker
	I0111 09:08:32.884218  788146 start.go:928] validating driver "docker" against &{Name:embed-certs-630626 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-630626 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:08:32.884313  788146 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:08:32.885006  788146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:08:32.975019  788146 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 09:08:32.964629074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:08:32.975339  788146 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:08:32.975363  788146 cni.go:84] Creating CNI manager for ""
	I0111 09:08:32.975414  788146 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:08:32.975450  788146 start.go:353] cluster config:
	{Name:embed-certs-630626 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-630626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:08:32.978664  788146 out.go:179] * Starting "embed-certs-630626" primary control-plane node in "embed-certs-630626" cluster
	I0111 09:08:32.981458  788146 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:08:32.984465  788146 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:08:32.987439  788146 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:08:32.987486  788146 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 09:08:32.987492  788146 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:08:32.987497  788146 cache.go:65] Caching tarball of preloaded images
	I0111 09:08:32.987585  788146 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:08:32.987595  788146 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 09:08:32.987704  788146 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/config.json ...
	I0111 09:08:33.014770  788146 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:08:33.014794  788146 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:08:33.014816  788146 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:08:33.014856  788146 start.go:360] acquireMachinesLock for embed-certs-630626: {Name:mkd95b5b6f25655182ae68d0dfec1c5695a6e23a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:08:33.014923  788146 start.go:364] duration metric: took 43.553µs to acquireMachinesLock for "embed-certs-630626"
	I0111 09:08:33.014955  788146 start.go:96] Skipping create...Using existing machine configuration
	I0111 09:08:33.015002  788146 fix.go:54] fixHost starting: 
	I0111 09:08:33.015270  788146 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:08:33.034361  788146 fix.go:112] recreateIfNeeded on embed-certs-630626: state=Stopped err=<nil>
	W0111 09:08:33.034413  788146 fix.go:138] unexpected machine state, will restart: <nil>
	I0111 09:08:29.840522  785363 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.512637523s
	I0111 09:08:31.527896  785363 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.200451869s
	I0111 09:08:33.829170  785363 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501660213s
	I0111 09:08:33.893264  785363 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 09:08:33.915614  785363 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 09:08:33.938765  785363 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 09:08:33.939236  785363 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-588333 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 09:08:33.955543  785363 kubeadm.go:319] [bootstrap-token] Using token: nqwtk4.t8556q6j5q5eskey
	I0111 09:08:33.958386  785363 out.go:252]   - Configuring RBAC rules ...
	I0111 09:08:33.958505  785363 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 09:08:33.967633  785363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 09:08:33.980260  785363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 09:08:33.985246  785363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 09:08:33.991350  785363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 09:08:33.996078  785363 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 09:08:34.236276  785363 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 09:08:34.748017  785363 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 09:08:35.237612  785363 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 09:08:35.238999  785363 kubeadm.go:319] 
	I0111 09:08:35.239078  785363 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 09:08:35.239084  785363 kubeadm.go:319] 
	I0111 09:08:35.239179  785363 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 09:08:35.239185  785363 kubeadm.go:319] 
	I0111 09:08:35.239210  785363 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 09:08:35.239269  785363 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 09:08:35.239320  785363 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 09:08:35.239324  785363 kubeadm.go:319] 
	I0111 09:08:35.239378  785363 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 09:08:35.239382  785363 kubeadm.go:319] 
	I0111 09:08:35.239429  785363 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 09:08:35.239432  785363 kubeadm.go:319] 
	I0111 09:08:35.239484  785363 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 09:08:35.239559  785363 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 09:08:35.239627  785363 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 09:08:35.239631  785363 kubeadm.go:319] 
	I0111 09:08:35.239715  785363 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 09:08:35.239791  785363 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 09:08:35.239795  785363 kubeadm.go:319] 
	I0111 09:08:35.239879  785363 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token nqwtk4.t8556q6j5q5eskey \
	I0111 09:08:35.239983  785363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb \
	I0111 09:08:35.240003  785363 kubeadm.go:319] 	--control-plane 
	I0111 09:08:35.240006  785363 kubeadm.go:319] 
	I0111 09:08:35.240091  785363 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 09:08:35.240095  785363 kubeadm.go:319] 
	I0111 09:08:35.240176  785363 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token nqwtk4.t8556q6j5q5eskey \
	I0111 09:08:35.240278  785363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb 
	I0111 09:08:35.245247  785363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:08:35.245662  785363 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 09:08:35.245768  785363 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 09:08:35.245787  785363 cni.go:84] Creating CNI manager for ""
	I0111 09:08:35.245795  785363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:08:35.249010  785363 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 09:08:33.038061  788146 out.go:252] * Restarting existing docker container for "embed-certs-630626" ...
	I0111 09:08:33.038244  788146 cli_runner.go:164] Run: docker start embed-certs-630626
	I0111 09:08:33.344886  788146 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:08:33.370197  788146 kic.go:430] container "embed-certs-630626" state is running.
	I0111 09:08:33.370787  788146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-630626
	I0111 09:08:33.404611  788146 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/config.json ...
	I0111 09:08:33.404833  788146 machine.go:94] provisionDockerMachine start ...
	I0111 09:08:33.404889  788146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:08:33.438406  788146 main.go:144] libmachine: Using SSH client type: native
	I0111 09:08:33.438771  788146 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33813 <nil> <nil>}
	I0111 09:08:33.438781  788146 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 09:08:33.439302  788146 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33850->127.0.0.1:33813: read: connection reset by peer
	I0111 09:08:36.589607  788146 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-630626
	
	I0111 09:08:36.589640  788146 ubuntu.go:182] provisioning hostname "embed-certs-630626"
	I0111 09:08:36.589710  788146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:08:36.608660  788146 main.go:144] libmachine: Using SSH client type: native
	I0111 09:08:36.608977  788146 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33813 <nil> <nil>}
	I0111 09:08:36.608996  788146 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-630626 && echo "embed-certs-630626" | sudo tee /etc/hostname
	I0111 09:08:36.781559  788146 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-630626
	
	I0111 09:08:36.781696  788146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:08:36.811641  788146 main.go:144] libmachine: Using SSH client type: native
	I0111 09:08:36.811979  788146 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33813 <nil> <nil>}
	I0111 09:08:36.811996  788146 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-630626' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-630626/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-630626' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 09:08:36.978804  788146 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 09:08:36.978849  788146 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 09:08:36.978874  788146 ubuntu.go:190] setting up certificates
	I0111 09:08:36.978883  788146 provision.go:84] configureAuth start
	I0111 09:08:36.978953  788146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-630626
	I0111 09:08:36.995783  788146 provision.go:143] copyHostCerts
	I0111 09:08:36.995850  788146 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 09:08:36.995868  788146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 09:08:36.995953  788146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 09:08:36.996058  788146 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 09:08:36.996070  788146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 09:08:36.996099  788146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 09:08:36.996164  788146 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 09:08:36.996172  788146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 09:08:36.996197  788146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 09:08:36.996251  788146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.embed-certs-630626 san=[127.0.0.1 192.168.85.2 embed-certs-630626 localhost minikube]
	I0111 09:08:37.147261  788146 provision.go:177] copyRemoteCerts
	I0111 09:08:37.147338  788146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 09:08:37.147382  788146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:08:37.166038  788146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:08:37.271471  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 09:08:37.293885  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0111 09:08:37.327073  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 09:08:37.354795  788146 provision.go:87] duration metric: took 375.889133ms to configureAuth
	I0111 09:08:37.354833  788146 ubuntu.go:206] setting minikube options for container-runtime
	I0111 09:08:37.355030  788146 config.go:182] Loaded profile config "embed-certs-630626": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:08:37.355149  788146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:08:37.374950  788146 main.go:144] libmachine: Using SSH client type: native
	I0111 09:08:37.375266  788146 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33813 <nil> <nil>}
	I0111 09:08:37.375282  788146 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 09:08:35.251902  785363 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 09:08:35.255982  785363 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0111 09:08:35.256010  785363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 09:08:35.269812  785363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0111 09:08:35.568466  785363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 09:08:35.568595  785363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:08:35.568679  785363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-588333 minikube.k8s.io/updated_at=2026_01_11T09_08_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=default-k8s-diff-port-588333 minikube.k8s.io/primary=true
	I0111 09:08:35.763700  785363 ops.go:34] apiserver oom_adj: -16
	I0111 09:08:35.763818  785363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:08:36.264726  785363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:08:36.764332  785363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:08:37.264072  785363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:08:37.763908  785363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:08:38.264483  785363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:08:37.736531  788146 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 09:08:37.736557  788146 machine.go:97] duration metric: took 4.331714291s to provisionDockerMachine
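The CRIO_MINIKUBE_OPTIONS drop-in written during the provisioning step above hands "--insecure-registry 10.96.0.0/12" (the cluster's service CIDR) to CRI-O before the runtime is restarted. If that ever needs to be confirmed by hand, one illustrative way (profile name and binary path taken from this log) is:

	out/minikube-linux-arm64 -p embed-certs-630626 ssh "cat /etc/sysconfig/crio.minikube"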
	I0111 09:08:37.736569  788146 start.go:293] postStartSetup for "embed-certs-630626" (driver="docker")
	I0111 09:08:37.736580  788146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 09:08:37.736678  788146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 09:08:37.736735  788146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:08:37.766259  788146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:08:37.878950  788146 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 09:08:37.882504  788146 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 09:08:37.882531  788146 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 09:08:37.882542  788146 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 09:08:37.882604  788146 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 09:08:37.882685  788146 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 09:08:37.882798  788146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 09:08:37.890294  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:08:37.911534  788146 start.go:296] duration metric: took 174.949548ms for postStartSetup
	I0111 09:08:37.911616  788146 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 09:08:37.911657  788146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:08:37.932441  788146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:08:38.035886  788146 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 09:08:38.041090  788146 fix.go:56] duration metric: took 5.026079692s for fixHost
	I0111 09:08:38.041113  788146 start.go:83] releasing machines lock for "embed-certs-630626", held for 5.026177638s
	I0111 09:08:38.041185  788146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-630626
	I0111 09:08:38.060945  788146 ssh_runner.go:195] Run: cat /version.json
	I0111 09:08:38.060972  788146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 09:08:38.060998  788146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:08:38.061030  788146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:08:38.081094  788146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:08:38.083904  788146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:08:38.288172  788146 ssh_runner.go:195] Run: systemctl --version
	I0111 09:08:38.296278  788146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 09:08:38.369958  788146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 09:08:38.375197  788146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 09:08:38.375301  788146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 09:08:38.383526  788146 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 09:08:38.383554  788146 start.go:496] detecting cgroup driver to use...
	I0111 09:08:38.383607  788146 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 09:08:38.383663  788146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 09:08:38.402094  788146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 09:08:38.417531  788146 docker.go:218] disabling cri-docker service (if available) ...
	I0111 09:08:38.417648  788146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 09:08:38.433878  788146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 09:08:38.447343  788146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 09:08:38.580895  788146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 09:08:38.710933  788146 docker.go:234] disabling docker service ...
	I0111 09:08:38.711053  788146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 09:08:38.725825  788146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 09:08:38.739379  788146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 09:08:38.885088  788146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 09:08:39.006351  788146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 09:08:39.020386  788146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 09:08:39.035124  788146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 09:08:39.035276  788146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:08:39.043845  788146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 09:08:39.043954  788146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:08:39.054472  788146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:08:39.069186  788146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:08:39.078281  788146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 09:08:39.086735  788146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:08:39.096841  788146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:08:39.108351  788146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:08:39.124904  788146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 09:08:39.134194  788146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 09:08:39.143038  788146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:08:39.299660  788146 ssh_runner.go:195] Run: sudo systemctl restart crio
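Taken together, the sed edits above pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup and allow unprivileged binds from port 0 upward before crio is restarted. A rough spot-check of the resulting file (run inside the node, e.g. via the profile's ssh subcommand; the exact surrounding contents will differ) could look like:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#     "net.ipv4.ip_unprivileged_port_start=0",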
	I0111 09:08:39.508183  788146 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 09:08:39.508305  788146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 09:08:39.513271  788146 start.go:574] Will wait 60s for crictl version
	I0111 09:08:39.513397  788146 ssh_runner.go:195] Run: which crictl
	I0111 09:08:39.517185  788146 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 09:08:39.546379  788146 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 09:08:39.546495  788146 ssh_runner.go:195] Run: crio --version
	I0111 09:08:39.589140  788146 ssh_runner.go:195] Run: crio --version
	I0111 09:08:39.626795  788146 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 09:08:38.763958  785363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:08:39.264130  785363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:08:39.764547  785363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:08:39.910795  785363 kubeadm.go:1114] duration metric: took 4.342248791s to wait for elevateKubeSystemPrivileges
	I0111 09:08:39.910832  785363 kubeadm.go:403] duration metric: took 16.812396497s to StartCluster
	I0111 09:08:39.910850  785363 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:08:39.910922  785363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:08:39.911536  785363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:08:39.911739  785363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:08:39.911842  785363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 09:08:39.912084  785363 config.go:182] Loaded profile config "default-k8s-diff-port-588333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:08:39.912136  785363 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:08:39.912202  785363 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-588333"
	I0111 09:08:39.912216  785363 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-588333"
	I0111 09:08:39.912243  785363 host.go:66] Checking if "default-k8s-diff-port-588333" exists ...
	I0111 09:08:39.912768  785363 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:08:39.913392  785363 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-588333"
	I0111 09:08:39.913418  785363 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-588333"
	I0111 09:08:39.913694  785363 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:08:39.917875  785363 out.go:179] * Verifying Kubernetes components...
	I0111 09:08:39.920810  785363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:08:39.948560  785363 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:08:39.629761  788146 cli_runner.go:164] Run: docker network inspect embed-certs-630626 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:08:39.646255  788146 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 09:08:39.650839  788146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:08:39.660846  788146 kubeadm.go:884] updating cluster {Name:embed-certs-630626 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-630626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 09:08:39.660974  788146 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:08:39.661030  788146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:08:39.701343  788146 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:08:39.701368  788146 crio.go:433] Images already preloaded, skipping extraction
	I0111 09:08:39.701469  788146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:08:39.728083  788146 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:08:39.728108  788146 cache_images.go:86] Images are preloaded, skipping loading
	I0111 09:08:39.728117  788146 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0111 09:08:39.728223  788146 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-630626 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-630626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
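The [Unit]/[Service]/[Install] drop-in above corresponds to the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf written a few lines further down (the 368-byte scp). Inside the node it can be inspected together with the base unit via, for example:

	sudo systemctl cat kubelet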
	I0111 09:08:39.728314  788146 ssh_runner.go:195] Run: crio config
	I0111 09:08:39.827570  788146 cni.go:84] Creating CNI manager for ""
	I0111 09:08:39.827602  788146 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:08:39.827632  788146 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 09:08:39.827662  788146 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-630626 NodeName:embed-certs-630626 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 09:08:39.827795  788146 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-630626"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 09:08:39.827869  788146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 09:08:39.837434  788146 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 09:08:39.837524  788146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 09:08:39.846744  788146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0111 09:08:39.862971  788146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 09:08:39.878111  788146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I0111 09:08:39.893728  788146 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 09:08:39.898064  788146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
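
The grep/cp pipeline above makes the control-plane.minikube.internal mapping in /etc/hosts idempotent: any stale line for the hostname is dropped before the fresh one is appended. A minimal Go sketch of the same idea (the helper name ensureHostsEntry is made up; the IP, hostname, and path are taken from the commands above):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so that exactly one line maps name to ip,
// mirroring the { grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts pipeline.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing line that already ends with "<tab><name>".
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
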
	I0111 09:08:39.909635  788146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:08:40.196149  788146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:08:40.218008  788146 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626 for IP: 192.168.85.2
	I0111 09:08:40.218038  788146 certs.go:195] generating shared ca certs ...
	I0111 09:08:40.218055  788146 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:08:40.218226  788146 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 09:08:40.218272  788146 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 09:08:40.218280  788146 certs.go:257] generating profile certs ...
	I0111 09:08:40.218376  788146 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/client.key
	I0111 09:08:40.218457  788146 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.key.d6bdd2b3
	I0111 09:08:40.218507  788146 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/proxy-client.key
	I0111 09:08:40.218641  788146 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 09:08:40.218684  788146 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 09:08:40.218693  788146 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 09:08:40.218735  788146 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 09:08:40.218759  788146 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 09:08:40.218781  788146 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 09:08:40.218823  788146 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:08:40.219456  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 09:08:40.265472  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 09:08:40.293730  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 09:08:40.346082  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 09:08:40.381089  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0111 09:08:40.420145  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 09:08:40.461644  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 09:08:40.532362  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/embed-certs-630626/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 09:08:40.583144  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 09:08:40.636119  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 09:08:40.681423  788146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 09:08:40.713657  788146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 09:08:40.735401  788146 ssh_runner.go:195] Run: openssl version
	I0111 09:08:40.744911  788146 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 09:08:40.760132  788146 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 09:08:40.780463  788146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 09:08:40.789120  788146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 09:08:40.789218  788146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 09:08:40.850188  788146 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 09:08:40.857855  788146 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:08:40.871639  788146 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 09:08:40.882456  788146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:08:40.888789  788146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:08:40.888888  788146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:08:40.940032  788146 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 09:08:40.947833  788146 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 09:08:40.956202  788146 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 09:08:40.964065  788146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 09:08:40.970106  788146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 09:08:40.970250  788146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 09:08:41.015312  788146 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
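
The three ln -fs / openssl x509 -hash sequences above install each CA under /usr/share/ca-certificates and then make it resolvable by its OpenSSL subject hash (the <hash>.0 links such as b5213941.0 and 51391683.0). A rough sketch of one such step, shelling out to the same openssl invocation shown in the log (the helper name linkBySubjectHash is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash installs certPath under /etc/ssl/certs/<subject-hash>.0,
// the layout OpenSSL uses to look up trusted CAs (mirrors the ln -fs calls above).
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// -f semantics: remove a stale link before recreating it.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
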
	I0111 09:08:41.023148  788146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 09:08:41.027716  788146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 09:08:41.080919  788146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 09:08:41.191546  788146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 09:08:41.311582  788146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 09:08:41.482565  788146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 09:08:41.611590  788146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
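
Each openssl x509 -checkend 86400 call above succeeds only if the certificate is still valid 24 hours from now. The same check can be done with Go's standard library alone; a sketch under that assumption, using one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given duration (what `openssl x509 -checkend <seconds>` tests).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
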
	I0111 09:08:41.748620  788146 kubeadm.go:401] StartCluster: {Name:embed-certs-630626 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-630626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:08:41.748747  788146 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 09:08:41.748850  788146 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 09:08:41.805324  788146 cri.go:96] found id: "59166e3edc5b1c5b88038cb476fcc1bb937cc685c07c9cc1684740b373d960e6"
	I0111 09:08:41.805348  788146 cri.go:96] found id: "d655f1b34c99b7061f83f1625edf83fdeafc1d3bd3a3df8027784d5a67499088"
	I0111 09:08:41.805363  788146 cri.go:96] found id: "6e1ee699631c60b05b3bf5f637dc3dc66eaa29e2df72af24028e423f9e31416f"
	I0111 09:08:41.805366  788146 cri.go:96] found id: "50f8850ccb505fa89954b440b9419765295b2320ecae2ea5cb7da62fd4a99f39"
	I0111 09:08:41.805394  788146 cri.go:96] found id: ""
	I0111 09:08:41.805467  788146 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 09:08:41.821316  788146 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:08:41Z" level=error msg="open /run/runc: no such file or directory"
	I0111 09:08:41.821431  788146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 09:08:41.842482  788146 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 09:08:41.842504  788146 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 09:08:41.842634  788146 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 09:08:41.858093  788146 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 09:08:41.858776  788146 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-630626" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:08:41.859111  788146 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-575040/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-630626" cluster setting kubeconfig missing "embed-certs-630626" context setting]
	I0111 09:08:41.870263  788146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:08:41.873407  788146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 09:08:41.908473  788146 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0111 09:08:41.908505  788146 kubeadm.go:602] duration metric: took 65.994128ms to restartPrimaryControlPlane
	I0111 09:08:41.908515  788146 kubeadm.go:403] duration metric: took 159.904538ms to StartCluster
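
The restart path above stages the new config as kubeadm.yaml.new and only reconfigures the control plane when the sudo diff -u reports a difference; here it does not, so restartPrimaryControlPlane returns after ~66ms. A tiny sketch of that decision (needsReconfig is a made-up name; the paths are the ones from the diff command):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfig reports whether the staged kubeadm config differs from the one
// already on the node, which is what the `sudo diff -u ... kubeadm.yaml.new` checks.
func needsReconfig(current, staged string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return true, nil // no existing config: must (re)configure
	}
	b, err := os.ReadFile(staged)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	reconfig, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("needs reconfiguration:", reconfig)
}
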
	I0111 09:08:41.908531  788146 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:08:41.908592  788146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:08:41.909881  788146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:08:41.910106  788146 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:08:41.910634  788146 config.go:182] Loaded profile config "embed-certs-630626": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:08:41.910643  788146 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:08:41.910734  788146 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-630626"
	I0111 09:08:41.910756  788146 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-630626"
	W0111 09:08:41.910766  788146 addons.go:248] addon storage-provisioner should already be in state true
	I0111 09:08:41.910759  788146 addons.go:70] Setting dashboard=true in profile "embed-certs-630626"
	I0111 09:08:41.910793  788146 host.go:66] Checking if "embed-certs-630626" exists ...
	I0111 09:08:41.910798  788146 addons.go:239] Setting addon dashboard=true in "embed-certs-630626"
	W0111 09:08:41.910833  788146 addons.go:248] addon dashboard should already be in state true
	I0111 09:08:41.910866  788146 host.go:66] Checking if "embed-certs-630626" exists ...
	I0111 09:08:41.911270  788146 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:08:41.911375  788146 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:08:41.911887  788146 addons.go:70] Setting default-storageclass=true in profile "embed-certs-630626"
	I0111 09:08:41.911922  788146 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-630626"
	I0111 09:08:41.912242  788146 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:08:41.916447  788146 out.go:179] * Verifying Kubernetes components...
	I0111 09:08:41.923327  788146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:08:41.971945  788146 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0111 09:08:41.976586  788146 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0111 09:08:41.979640  788146 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:08:41.982188  788146 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0111 09:08:41.982211  788146 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0111 09:08:41.982298  788146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:08:41.983131  788146 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:08:41.983156  788146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:08:41.983210  788146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:08:41.985823  788146 addons.go:239] Setting addon default-storageclass=true in "embed-certs-630626"
	W0111 09:08:41.985856  788146 addons.go:248] addon default-storageclass should already be in state true
	I0111 09:08:41.985879  788146 host.go:66] Checking if "embed-certs-630626" exists ...
	I0111 09:08:41.991713  788146 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:08:42.037716  788146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:08:42.051714  788146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:08:42.054421  788146 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:08:42.054443  788146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:08:42.054507  788146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:08:42.082381  788146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:08:42.477438  788146 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0111 09:08:42.477526  788146 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0111 09:08:42.561421  788146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:08:42.567321  788146 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0111 09:08:42.567397  788146 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0111 09:08:42.607318  788146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:08:42.653141  788146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:08:42.685373  788146 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0111 09:08:42.685397  788146 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0111 09:08:39.951470  785363 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:08:39.951493  785363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:08:39.951560  785363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:08:39.963245  785363 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-588333"
	I0111 09:08:39.963284  785363 host.go:66] Checking if "default-k8s-diff-port-588333" exists ...
	I0111 09:08:39.963730  785363 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:08:39.984572  785363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:08:40.009495  785363 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:08:40.009522  785363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:08:40.009605  785363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:08:40.048174  785363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:08:40.547597  785363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:08:40.597653  785363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:08:40.771367  785363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0111 09:08:40.771501  785363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:08:41.641135  785363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.09350362s)
	I0111 09:08:42.858534  785363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.260842438s)
	I0111 09:08:42.858732  785363 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.087196956s)
	I0111 09:08:42.859843  785363 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-588333" to be "Ready" ...
	I0111 09:08:42.860060  785363 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.088655986s)
	I0111 09:08:42.860074  785363 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0111 09:08:42.863432  785363 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0111 09:08:42.866453  785363 addons.go:530] duration metric: took 2.95430948s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0111 09:08:43.364360  785363 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-588333" context rescaled to 1 replicas
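
The kapi.go line above pins the coredns deployment in kube-system back to a single replica for the restarted cluster. A hedged client-go sketch of that rescale (the kubeconfig path is the one used by the KUBECONFIG= apply commands above; error handling is kept minimal for brevity):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: build the client from the node-local kubeconfig shown in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Read the current scale of the coredns deployment and pin it to 1 replica.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
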
	I0111 09:08:42.838215  788146 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0111 09:08:42.838242  788146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0111 09:08:42.924044  788146 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0111 09:08:42.924066  788146 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0111 09:08:43.080801  788146 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0111 09:08:43.080868  788146 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0111 09:08:43.199621  788146 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0111 09:08:43.199647  788146 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0111 09:08:43.238632  788146 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0111 09:08:43.238659  788146 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0111 09:08:43.294100  788146 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:08:43.294209  788146 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0111 09:08:43.338640  788146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:08:48.009627  788146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.448103152s)
	I0111 09:08:48.009639  788146 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.402274001s)
	I0111 09:08:48.009668  788146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.356501506s)
	I0111 09:08:48.009704  788146 node_ready.go:35] waiting up to 6m0s for node "embed-certs-630626" to be "Ready" ...
	I0111 09:08:48.010070  788146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.671393177s)
	I0111 09:08:48.013422  788146 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-630626 addons enable metrics-server
	
	I0111 09:08:48.033891  788146 node_ready.go:49] node "embed-certs-630626" is "Ready"
	I0111 09:08:48.033922  788146 node_ready.go:38] duration metric: took 24.15387ms for node "embed-certs-630626" to be "Ready" ...
	I0111 09:08:48.033961  788146 api_server.go:52] waiting for apiserver process to appear ...
	I0111 09:08:48.034051  788146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 09:08:48.049249  788146 api_server.go:72] duration metric: took 6.138937061s to wait for apiserver process to appear ...
	I0111 09:08:48.049285  788146 api_server.go:88] waiting for apiserver healthz status ...
	I0111 09:08:48.049305  788146 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 09:08:48.057984  788146 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0111 09:08:48.059151  788146 api_server.go:141] control plane version: v1.35.0
	I0111 09:08:48.059178  788146 api_server.go:131] duration metric: took 9.885329ms to wait for apiserver health ...
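
The healthz wait above is a plain HTTPS GET against https://192.168.85.2:8443/healthz repeated until the apiserver answers 200 ok. A minimal polling sketch; the InsecureSkipVerify transport is an assumption to keep the example short, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption: skip TLS verification for the sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
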
	I0111 09:08:48.059188  788146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 09:08:48.060289  788146 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W0111 09:08:44.862693  785363 node_ready.go:57] node "default-k8s-diff-port-588333" has "Ready":"False" status (will retry)
	W0111 09:08:46.862916  785363 node_ready.go:57] node "default-k8s-diff-port-588333" has "Ready":"False" status (will retry)
	I0111 09:08:48.063152  788146 addons.go:530] duration metric: took 6.152504382s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0111 09:08:48.063168  788146 system_pods.go:59] 8 kube-system pods found
	I0111 09:08:48.063214  788146 system_pods.go:61] "coredns-7d764666f9-x5tzj" [6885d7ae-0fc5-41e2-b94f-baff32115d85] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:08:48.063224  788146 system_pods.go:61] "etcd-embed-certs-630626" [d5f1c15c-1914-46ff-b4aa-4375c6a525d7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:08:48.063230  788146 system_pods.go:61] "kindnet-w5nb5" [70770c96-0cce-46bf-b231-3d8af21b400d] Running
	I0111 09:08:48.063239  788146 system_pods.go:61] "kube-apiserver-embed-certs-630626" [96cb11be-de0a-4229-8ff1-771a27d40028] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:08:48.063246  788146 system_pods.go:61] "kube-controller-manager-embed-certs-630626" [dc584cb9-9f28-4ba3-8c7a-d90a9bd3c17a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:08:48.063251  788146 system_pods.go:61] "kube-proxy-7xnsq" [870f58a8-0905-4b5e-aa6a-b6c96819a399] Running
	I0111 09:08:48.063259  788146 system_pods.go:61] "kube-scheduler-embed-certs-630626" [a5726c35-f858-4559-b5d4-c19dbaa74a86] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:08:48.063263  788146 system_pods.go:61] "storage-provisioner" [d5ac437b-5119-4b10-9625-81ff88fee999] Running
	I0111 09:08:48.063269  788146 system_pods.go:74] duration metric: took 4.075704ms to wait for pod list to return data ...
	I0111 09:08:48.063275  788146 default_sa.go:34] waiting for default service account to be created ...
	I0111 09:08:48.065966  788146 default_sa.go:45] found service account: "default"
	I0111 09:08:48.065993  788146 default_sa.go:55] duration metric: took 2.708794ms for default service account to be created ...
	I0111 09:08:48.066003  788146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 09:08:48.068555  788146 system_pods.go:86] 8 kube-system pods found
	I0111 09:08:48.068594  788146 system_pods.go:89] "coredns-7d764666f9-x5tzj" [6885d7ae-0fc5-41e2-b94f-baff32115d85] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:08:48.068603  788146 system_pods.go:89] "etcd-embed-certs-630626" [d5f1c15c-1914-46ff-b4aa-4375c6a525d7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:08:48.068608  788146 system_pods.go:89] "kindnet-w5nb5" [70770c96-0cce-46bf-b231-3d8af21b400d] Running
	I0111 09:08:48.068616  788146 system_pods.go:89] "kube-apiserver-embed-certs-630626" [96cb11be-de0a-4229-8ff1-771a27d40028] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:08:48.068623  788146 system_pods.go:89] "kube-controller-manager-embed-certs-630626" [dc584cb9-9f28-4ba3-8c7a-d90a9bd3c17a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:08:48.068634  788146 system_pods.go:89] "kube-proxy-7xnsq" [870f58a8-0905-4b5e-aa6a-b6c96819a399] Running
	I0111 09:08:48.068641  788146 system_pods.go:89] "kube-scheduler-embed-certs-630626" [a5726c35-f858-4559-b5d4-c19dbaa74a86] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:08:48.068650  788146 system_pods.go:89] "storage-provisioner" [d5ac437b-5119-4b10-9625-81ff88fee999] Running
	I0111 09:08:48.068657  788146 system_pods.go:126] duration metric: took 2.648527ms to wait for k8s-apps to be running ...
	I0111 09:08:48.068669  788146 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 09:08:48.068726  788146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:08:48.082600  788146 system_svc.go:56] duration metric: took 13.920515ms WaitForService to wait for kubelet
	I0111 09:08:48.082629  788146 kubeadm.go:587] duration metric: took 6.172322672s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:08:48.082647  788146 node_conditions.go:102] verifying NodePressure condition ...
	I0111 09:08:48.085215  788146 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 09:08:48.085246  788146 node_conditions.go:123] node cpu capacity is 2
	I0111 09:08:48.085261  788146 node_conditions.go:105] duration metric: took 2.60778ms to run NodePressure ...
	I0111 09:08:48.085300  788146 start.go:242] waiting for startup goroutines ...
	I0111 09:08:48.085316  788146 start.go:247] waiting for cluster config update ...
	I0111 09:08:48.085329  788146 start.go:256] writing updated cluster config ...
	I0111 09:08:48.085645  788146 ssh_runner.go:195] Run: rm -f paused
	I0111 09:08:48.089433  788146 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:08:48.093590  788146 pod_ready.go:83] waiting for pod "coredns-7d764666f9-x5tzj" in "kube-system" namespace to be "Ready" or be gone ...
	W0111 09:08:50.118547  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	W0111 09:08:52.598627  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	W0111 09:08:48.863623  785363 node_ready.go:57] node "default-k8s-diff-port-588333" has "Ready":"False" status (will retry)
	W0111 09:08:51.363306  785363 node_ready.go:57] node "default-k8s-diff-port-588333" has "Ready":"False" status (will retry)
	W0111 09:08:54.601772  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	W0111 09:08:56.601913  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	W0111 09:08:53.863589  785363 node_ready.go:57] node "default-k8s-diff-port-588333" has "Ready":"False" status (will retry)
	I0111 09:08:55.872125  785363 node_ready.go:49] node "default-k8s-diff-port-588333" is "Ready"
	I0111 09:08:55.872159  785363 node_ready.go:38] duration metric: took 13.012290307s for node "default-k8s-diff-port-588333" to be "Ready" ...
	I0111 09:08:55.872174  785363 api_server.go:52] waiting for apiserver process to appear ...
	I0111 09:08:55.872232  785363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 09:08:55.924680  785363 api_server.go:72] duration metric: took 16.012902651s to wait for apiserver process to appear ...
	I0111 09:08:55.924705  785363 api_server.go:88] waiting for apiserver healthz status ...
	I0111 09:08:55.924723  785363 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0111 09:08:55.971410  785363 api_server.go:325] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0111 09:08:55.972590  785363 api_server.go:141] control plane version: v1.35.0
	I0111 09:08:55.972612  785363 api_server.go:131] duration metric: took 47.900234ms to wait for apiserver health ...
	I0111 09:08:55.972621  785363 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 09:08:55.990380  785363 system_pods.go:59] 8 kube-system pods found
	I0111 09:08:55.990409  785363 system_pods.go:61] "coredns-7d764666f9-2lh6p" [54a6cea1-73a3-4ca6-bd7a-afbbac903c9b] Pending
	I0111 09:08:55.990417  785363 system_pods.go:61] "etcd-default-k8s-diff-port-588333" [ac8ac94a-7e8c-4899-98e5-a36f9dcaa48c] Running
	I0111 09:08:55.990421  785363 system_pods.go:61] "kindnet-8pg22" [d8bfcb3a-747f-4072-9916-be69d991bcea] Running
	I0111 09:08:55.990426  785363 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-588333" [caae5ef6-ad07-477b-904c-95d13dd2c926] Running
	I0111 09:08:55.990431  785363 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-588333" [e8274e7b-a729-43ee-8e0a-c9f156d0bdca] Running
	I0111 09:08:55.990439  785363 system_pods.go:61] "kube-proxy-g4x2l" [23972631-486c-42e5-a029-569447059d31] Running
	I0111 09:08:55.990444  785363 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-588333" [0d5718df-db89-41b2-9cb6-c52b1c63fa5f] Running
	I0111 09:08:55.990451  785363 system_pods.go:61] "storage-provisioner" [acdfb8c3-6907-4ce4-b95f-2369474a2ece] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 09:08:55.990457  785363 system_pods.go:74] duration metric: took 17.831737ms to wait for pod list to return data ...
	I0111 09:08:55.990465  785363 default_sa.go:34] waiting for default service account to be created ...
	I0111 09:08:55.994784  785363 default_sa.go:45] found service account: "default"
	I0111 09:08:55.994806  785363 default_sa.go:55] duration metric: took 4.335202ms for default service account to be created ...
	I0111 09:08:55.994817  785363 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 09:08:56.000758  785363 system_pods.go:86] 8 kube-system pods found
	I0111 09:08:56.000839  785363 system_pods.go:89] "coredns-7d764666f9-2lh6p" [54a6cea1-73a3-4ca6-bd7a-afbbac903c9b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:08:56.000863  785363 system_pods.go:89] "etcd-default-k8s-diff-port-588333" [ac8ac94a-7e8c-4899-98e5-a36f9dcaa48c] Running
	I0111 09:08:56.000907  785363 system_pods.go:89] "kindnet-8pg22" [d8bfcb3a-747f-4072-9916-be69d991bcea] Running
	I0111 09:08:56.000930  785363 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-588333" [caae5ef6-ad07-477b-904c-95d13dd2c926] Running
	I0111 09:08:56.000952  785363 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-588333" [e8274e7b-a729-43ee-8e0a-c9f156d0bdca] Running
	I0111 09:08:56.000988  785363 system_pods.go:89] "kube-proxy-g4x2l" [23972631-486c-42e5-a029-569447059d31] Running
	I0111 09:08:56.001009  785363 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-588333" [0d5718df-db89-41b2-9cb6-c52b1c63fa5f] Running
	I0111 09:08:56.001030  785363 system_pods.go:89] "storage-provisioner" [acdfb8c3-6907-4ce4-b95f-2369474a2ece] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 09:08:56.001088  785363 retry.go:84] will retry after 300ms: missing components: kube-dns
	I0111 09:08:56.320863  785363 system_pods.go:86] 8 kube-system pods found
	I0111 09:08:56.320971  785363 system_pods.go:89] "coredns-7d764666f9-2lh6p" [54a6cea1-73a3-4ca6-bd7a-afbbac903c9b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:08:56.321014  785363 system_pods.go:89] "etcd-default-k8s-diff-port-588333" [ac8ac94a-7e8c-4899-98e5-a36f9dcaa48c] Running
	I0111 09:08:56.321045  785363 system_pods.go:89] "kindnet-8pg22" [d8bfcb3a-747f-4072-9916-be69d991bcea] Running
	I0111 09:08:56.321066  785363 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-588333" [caae5ef6-ad07-477b-904c-95d13dd2c926] Running
	I0111 09:08:56.321102  785363 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-588333" [e8274e7b-a729-43ee-8e0a-c9f156d0bdca] Running
	I0111 09:08:56.321127  785363 system_pods.go:89] "kube-proxy-g4x2l" [23972631-486c-42e5-a029-569447059d31] Running
	I0111 09:08:56.321148  785363 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-588333" [0d5718df-db89-41b2-9cb6-c52b1c63fa5f] Running
	I0111 09:08:56.321185  785363 system_pods.go:89] "storage-provisioner" [acdfb8c3-6907-4ce4-b95f-2369474a2ece] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 09:08:56.583052  785363 system_pods.go:86] 8 kube-system pods found
	I0111 09:08:56.583085  785363 system_pods.go:89] "coredns-7d764666f9-2lh6p" [54a6cea1-73a3-4ca6-bd7a-afbbac903c9b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:08:56.583092  785363 system_pods.go:89] "etcd-default-k8s-diff-port-588333" [ac8ac94a-7e8c-4899-98e5-a36f9dcaa48c] Running
	I0111 09:08:56.583098  785363 system_pods.go:89] "kindnet-8pg22" [d8bfcb3a-747f-4072-9916-be69d991bcea] Running
	I0111 09:08:56.583103  785363 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-588333" [caae5ef6-ad07-477b-904c-95d13dd2c926] Running
	I0111 09:08:56.583107  785363 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-588333" [e8274e7b-a729-43ee-8e0a-c9f156d0bdca] Running
	I0111 09:08:56.583112  785363 system_pods.go:89] "kube-proxy-g4x2l" [23972631-486c-42e5-a029-569447059d31] Running
	I0111 09:08:56.583116  785363 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-588333" [0d5718df-db89-41b2-9cb6-c52b1c63fa5f] Running
	I0111 09:08:56.583122  785363 system_pods.go:89] "storage-provisioner" [acdfb8c3-6907-4ce4-b95f-2369474a2ece] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 09:08:56.915614  785363 system_pods.go:86] 8 kube-system pods found
	I0111 09:08:56.915646  785363 system_pods.go:89] "coredns-7d764666f9-2lh6p" [54a6cea1-73a3-4ca6-bd7a-afbbac903c9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:08:56.915653  785363 system_pods.go:89] "etcd-default-k8s-diff-port-588333" [ac8ac94a-7e8c-4899-98e5-a36f9dcaa48c] Running
	I0111 09:08:56.915658  785363 system_pods.go:89] "kindnet-8pg22" [d8bfcb3a-747f-4072-9916-be69d991bcea] Running
	I0111 09:08:56.915662  785363 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-588333" [caae5ef6-ad07-477b-904c-95d13dd2c926] Running
	I0111 09:08:56.915670  785363 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-588333" [e8274e7b-a729-43ee-8e0a-c9f156d0bdca] Running
	I0111 09:08:56.915674  785363 system_pods.go:89] "kube-proxy-g4x2l" [23972631-486c-42e5-a029-569447059d31] Running
	I0111 09:08:56.915679  785363 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-588333" [0d5718df-db89-41b2-9cb6-c52b1c63fa5f] Running
	I0111 09:08:56.915683  785363 system_pods.go:89] "storage-provisioner" [acdfb8c3-6907-4ce4-b95f-2369474a2ece] Running
	I0111 09:08:56.915691  785363 system_pods.go:126] duration metric: took 920.869709ms to wait for k8s-apps to be running ...
	I0111 09:08:56.915699  785363 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 09:08:56.915773  785363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:08:56.948999  785363 system_svc.go:56] duration metric: took 33.291194ms WaitForService to wait for kubelet
	I0111 09:08:56.949086  785363 kubeadm.go:587] duration metric: took 17.037311201s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:08:56.949120  785363 node_conditions.go:102] verifying NodePressure condition ...
	I0111 09:08:56.978627  785363 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 09:08:56.978656  785363 node_conditions.go:123] node cpu capacity is 2
	I0111 09:08:56.978670  785363 node_conditions.go:105] duration metric: took 29.532195ms to run NodePressure ...
	I0111 09:08:56.978683  785363 start.go:242] waiting for startup goroutines ...
	I0111 09:08:56.978690  785363 start.go:247] waiting for cluster config update ...
	I0111 09:08:56.978701  785363 start.go:256] writing updated cluster config ...
	I0111 09:08:56.978984  785363 ssh_runner.go:195] Run: rm -f paused
	I0111 09:08:56.984892  785363 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:08:56.995670  785363 pod_ready.go:83] waiting for pod "coredns-7d764666f9-2lh6p" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:08:57.007787  785363 pod_ready.go:94] pod "coredns-7d764666f9-2lh6p" is "Ready"
	I0111 09:08:57.007870  785363 pod_ready.go:86] duration metric: took 12.17559ms for pod "coredns-7d764666f9-2lh6p" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:08:57.015874  785363 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:08:57.023497  785363 pod_ready.go:94] pod "etcd-default-k8s-diff-port-588333" is "Ready"
	I0111 09:08:57.023581  785363 pod_ready.go:86] duration metric: took 7.585008ms for pod "etcd-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:08:57.028208  785363 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:08:57.035260  785363 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-588333" is "Ready"
	I0111 09:08:57.035339  785363 pod_ready.go:86] duration metric: took 7.065683ms for pod "kube-apiserver-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:08:57.053809  785363 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:08:57.389090  785363 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-588333" is "Ready"
	I0111 09:08:57.389159  785363 pod_ready.go:86] duration metric: took 335.227185ms for pod "kube-controller-manager-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:08:57.589262  785363 pod_ready.go:83] waiting for pod "kube-proxy-g4x2l" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:08:57.989810  785363 pod_ready.go:94] pod "kube-proxy-g4x2l" is "Ready"
	I0111 09:08:57.989884  785363 pod_ready.go:86] duration metric: took 400.553006ms for pod "kube-proxy-g4x2l" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:08:58.189132  785363 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:08:58.589921  785363 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-588333" is "Ready"
	I0111 09:08:58.589990  785363 pod_ready.go:86] duration metric: took 400.790785ms for pod "kube-scheduler-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:08:58.590017  785363 pod_ready.go:40] duration metric: took 1.605097221s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
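
The pod_ready loop above polls each control-plane pod until its Ready condition turns True, or until the 4m budget runs out. A client-go sketch of the same per-pod check, using the default host kubeconfig as an assumption and one pod name taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: ~/.kube/config; the test harness uses its profile kubeconfig instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same 4m budget as the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7d764666f9-2lh6p", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
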
	I0111 09:08:58.671544  785363 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 09:08:58.676572  785363 out.go:203] 
	W0111 09:08:58.681452  785363 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 09:08:58.684978  785363 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:08:58.689179  785363 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-588333" cluster and "default" namespace by default
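
The kubectl warning a few lines up is produced by comparing client and server minor versions (1.33 vs 1.35, so a skew of 2, which exceeds the one-minor-version tolerance). A small sketch of that arithmetic (minorSkew is a made-up helper):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew parses versions like "1.33.2" and "1.35.0" and returns the
// absolute difference of their minor components.
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.33.2", "1.35.0")
	if skew > 1 {
		fmt.Printf("kubectl may have incompatibilities with the cluster (minor skew: %d)\n", skew)
	}
}
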
	W0111 09:08:58.604359  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	W0111 09:09:01.099120  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	W0111 09:09:03.099665  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	W0111 09:09:05.598724  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	W0111 09:09:07.599357  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Jan 11 09:08:56 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:56.424409561Z" level=info msg="Created container 697e45f2b7f2ce6fd7cae3307c93b75ee26fe8e3d833ceadbb6c73c778afc277: kube-system/coredns-7d764666f9-2lh6p/coredns" id=bfe3e86e-6ef8-4076-9a27-1326116078e5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:08:56 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:56.426494412Z" level=info msg="Starting container: 697e45f2b7f2ce6fd7cae3307c93b75ee26fe8e3d833ceadbb6c73c778afc277" id=d8e5ad3e-9a36-4a28-9af6-1fff95d3ae37 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:08:56 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:56.428391273Z" level=info msg="Started container" PID=1776 containerID=697e45f2b7f2ce6fd7cae3307c93b75ee26fe8e3d833ceadbb6c73c778afc277 description=kube-system/coredns-7d764666f9-2lh6p/coredns id=d8e5ad3e-9a36-4a28-9af6-1fff95d3ae37 name=/runtime.v1.RuntimeService/StartContainer sandboxID=42f7e83369b0d77e4d2d19be494b247fd862a3dc7aa09930d6c011e1fd422242
	Jan 11 09:08:59 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:59.296701469Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3abd4477-0a38-4909-9b7a-c440e0852a9c name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:08:59 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:59.2967822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:08:59 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:59.30637205Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a203e18764827cac39ec775b2ac91b2cf70dc352f35537f137ecdf86adeffa79 UID:d123dc54-6086-4b61-9c4a-b6591f715b33 NetNS:/var/run/netns/35ab6d71-5a5c-41a7-887a-7ed68429150a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000117b40}] Aliases:map[]}"
	Jan 11 09:08:59 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:59.308703978Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 11 09:08:59 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:59.331183446Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a203e18764827cac39ec775b2ac91b2cf70dc352f35537f137ecdf86adeffa79 UID:d123dc54-6086-4b61-9c4a-b6591f715b33 NetNS:/var/run/netns/35ab6d71-5a5c-41a7-887a-7ed68429150a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000117b40}] Aliases:map[]}"
	Jan 11 09:08:59 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:59.331557736Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 11 09:08:59 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:59.338509136Z" level=info msg="Ran pod sandbox a203e18764827cac39ec775b2ac91b2cf70dc352f35537f137ecdf86adeffa79 with infra container: default/busybox/POD" id=3abd4477-0a38-4909-9b7a-c440e0852a9c name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:08:59 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:59.339824271Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=649a7044-e68c-41e0-87b8-3b4d084d9b54 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:08:59 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:59.340072389Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=649a7044-e68c-41e0-87b8-3b4d084d9b54 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:08:59 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:59.340915231Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=649a7044-e68c-41e0-87b8-3b4d084d9b54 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:08:59 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:59.344571573Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d06b7d7d-5de5-4e0f-885a-bfbadc80ea1d name=/runtime.v1.ImageService/PullImage
	Jan 11 09:08:59 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:08:59.3475377Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 11 09:09:01 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:09:01.606302138Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=d06b7d7d-5de5-4e0f-885a-bfbadc80ea1d name=/runtime.v1.ImageService/PullImage
	Jan 11 09:09:01 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:09:01.607389142Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8559209b-26bb-4d56-9320-38a343a888b6 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:09:01 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:09:01.609280949Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a9124d7f-6b00-46c2-8a2b-7025eab23377 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:09:01 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:09:01.615536751Z" level=info msg="Creating container: default/busybox/busybox" id=a2e87221-b723-47a2-9f22-266fa93f842b name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:09:01 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:09:01.615647784Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:09:01 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:09:01.620529881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:09:01 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:09:01.621165287Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:09:01 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:09:01.638937371Z" level=info msg="Created container 23f65ded491dded163d5b805219eb6ff4be35300eaab0891fb9a702ecaf65184: default/busybox/busybox" id=a2e87221-b723-47a2-9f22-266fa93f842b name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:09:01 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:09:01.641378059Z" level=info msg="Starting container: 23f65ded491dded163d5b805219eb6ff4be35300eaab0891fb9a702ecaf65184" id=1a082786-a0d1-4972-9db6-39c00148fe45 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:09:01 default-k8s-diff-port-588333 crio[837]: time="2026-01-11T09:09:01.644093754Z" level=info msg="Started container" PID=1837 containerID=23f65ded491dded163d5b805219eb6ff4be35300eaab0891fb9a702ecaf65184 description=default/busybox/busybox id=1a082786-a0d1-4972-9db6-39c00148fe45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a203e18764827cac39ec775b2ac91b2cf70dc352f35537f137ecdf86adeffa79
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	23f65ded491dd       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   6 seconds ago       Running             busybox                   0                   a203e18764827       busybox                                                default
	697e45f2b7f2c       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      12 seconds ago      Running             coredns                   0                   42f7e83369b0d       coredns-7d764666f9-2lh6p                               kube-system
	eab36d6e9dd32       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   8f651c0e1fdf3       storage-provisioner                                    kube-system
	3ac0883372da2       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    23 seconds ago      Running             kindnet-cni               0                   cdddd0ca9d3f5       kindnet-8pg22                                          kube-system
	c3fe74ed13f16       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      27 seconds ago      Running             kube-proxy                0                   4fb96bac42864       kube-proxy-g4x2l                                       kube-system
	09ecbfa1ea045       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      39 seconds ago      Running             etcd                      0                   f3faea2f7d277       etcd-default-k8s-diff-port-588333                      kube-system
	cedd4794f6f22       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      39 seconds ago      Running             kube-apiserver            0                   afdfdb8ddda72       kube-apiserver-default-k8s-diff-port-588333            kube-system
	d335641214df6       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      39 seconds ago      Running             kube-scheduler            0                   28df86e57603b       kube-scheduler-default-k8s-diff-port-588333            kube-system
	16a3d7c4320ef       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      39 seconds ago      Running             kube-controller-manager   0                   ed9cad6d039c8       kube-controller-manager-default-k8s-diff-port-588333   kube-system
	
	
	==> coredns [697e45f2b7f2ce6fd7cae3307c93b75ee26fe8e3d833ceadbb6c73c778afc277] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41300 - 39280 "HINFO IN 8207859819737658534.4981668668781650504. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023594428s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-588333
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-588333
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=default-k8s-diff-port-588333
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_08_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:08:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-588333
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:09:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:09:05 +0000   Sun, 11 Jan 2026 09:08:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:09:05 +0000   Sun, 11 Jan 2026 09:08:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:09:05 +0000   Sun, 11 Jan 2026 09:08:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 09:09:05 +0000   Sun, 11 Jan 2026 09:08:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-588333
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                3726b86b-01d8-43b3-a465-e0aaf1859904
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-2lh6p                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-default-k8s-diff-port-588333                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-8pg22                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-default-k8s-diff-port-588333             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-588333    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-g4x2l                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-default-k8s-diff-port-588333             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node default-k8s-diff-port-588333 event: Registered Node default-k8s-diff-port-588333 in Controller
	
	
	==> dmesg <==
	[Jan11 08:37] overlayfs: idmapped layers are currently not supported
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	[Jan11 09:06] overlayfs: idmapped layers are currently not supported
	[Jan11 09:07] overlayfs: idmapped layers are currently not supported
	[Jan11 09:08] overlayfs: idmapped layers are currently not supported
	[ +12.491892] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [09ecbfa1ea045873d916aa84db0f0a213f66c6da260b62f893db63a6cf6a7ef7] <==
	{"level":"info","ts":"2026-01-11T09:08:28.855739Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:08:29.418276Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-11T09:08:29.418428Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-11T09:08:29.418537Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-11T09:08:29.418636Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:08:29.418680Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:08:29.420513Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T09:08:29.420607Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:08:29.420662Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-11T09:08:29.420706Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T09:08:29.422316Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-588333 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:08:29.422414Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:08:29.422449Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:08:29.422458Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:08:29.422650Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:08:29.430338Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T09:08:29.449781Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:08:29.451251Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:08:29.451357Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:08:29.450590Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:08:29.453803Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-11T09:08:29.458188Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T09:08:29.451024Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:08:29.462944Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T09:08:29.467115Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	
	
	==> kernel <==
	 09:09:08 up  3:51,  0 user,  load average: 3.66, 2.12, 1.99
	Linux default-k8s-diff-port-588333 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3ac0883372da2f283f051413f677c8ba0f72aad66de2864713d6503a46017cf3] <==
	I0111 09:08:45.243807       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:08:45.244260       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0111 09:08:45.244490       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:08:45.244540       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:08:45.244586       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:08:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:08:45.541789       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:08:45.541895       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:08:45.541939       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:08:45.543081       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0111 09:08:45.742464       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 09:08:45.742566       1 metrics.go:72] Registering metrics
	I0111 09:08:45.742667       1 controller.go:711] "Syncing nftables rules"
	I0111 09:08:55.540816       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 09:08:55.540874       1 main.go:301] handling current node
	I0111 09:09:05.542333       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 09:09:05.542474       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cedd4794f6f22d9b8b8d511abca6cfb4fd3a8b05faaed28758dfe41ddf82e0b4] <==
	I0111 09:08:31.665859       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I0111 09:08:31.670151       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0111 09:08:31.670162       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:08:31.674827       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0111 09:08:31.676931       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:08:31.680633       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 09:08:31.843252       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:08:32.298363       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0111 09:08:32.306954       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0111 09:08:32.307038       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 09:08:33.325944       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:08:33.490901       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:08:33.620155       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0111 09:08:33.648689       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0111 09:08:33.650025       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 09:08:33.663977       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:08:34.402668       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 09:08:34.678210       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 09:08:34.745364       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0111 09:08:34.759775       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0111 09:08:40.455374       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 09:08:40.473189       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:08:40.493882       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0111 09:08:40.539771       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0111 09:09:07.124204       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:42968: use of closed network connection
	
	
	==> kube-controller-manager [16a3d7c4320ef439d8121391e1d37575a3f4d1db70a4b69afdbd509508674bfc] <==
	I0111 09:08:39.334986       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.335054       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.335434       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.334734       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.335762       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.334745       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.334753       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.334760       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.334767       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.337781       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.334773       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.334792       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.334803       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.338381       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0111 09:08:39.342837       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-588333"
	I0111 09:08:39.342913       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0111 09:08:39.334878       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.367795       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.370743       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.371033       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.371091       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 09:08:39.371121       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 09:08:39.373533       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:39.391051       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:59.346276       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [c3fe74ed13f16483461b2161101cc258d5383ce8562b9821c60564b65f5ff5fd] <==
	I0111 09:08:42.225402       1 server_linux.go:53] "Using iptables proxy"
	I0111 09:08:42.424173       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:08:42.526335       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:42.526367       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0111 09:08:42.526462       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 09:08:42.661252       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:08:42.661313       1 server_linux.go:136] "Using iptables Proxier"
	I0111 09:08:42.697848       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 09:08:42.698249       1 server.go:529] "Version info" version="v1.35.0"
	I0111 09:08:42.698272       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:08:42.769604       1 config.go:200] "Starting service config controller"
	I0111 09:08:42.769627       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 09:08:42.769664       1 config.go:106] "Starting endpoint slice config controller"
	I0111 09:08:42.769669       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 09:08:42.769681       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 09:08:42.769685       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 09:08:42.769889       1 config.go:309] "Starting node config controller"
	I0111 09:08:42.769904       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 09:08:42.769912       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 09:08:42.885861       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 09:08:42.885902       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 09:08:42.885939       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d335641214df604ed4e39c92edd5db01980c0ca9fb9ae730a48e063292934db4] <==
	E0111 09:08:31.566990       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 09:08:31.567103       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 09:08:31.567207       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 09:08:31.567740       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 09:08:31.582451       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 09:08:31.582690       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 09:08:31.582805       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 09:08:31.582973       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0111 09:08:32.375564       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0111 09:08:32.385341       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 09:08:32.412336       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 09:08:32.468115       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 09:08:32.492456       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 09:08:32.529130       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 09:08:32.649547       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 09:08:32.714185       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0111 09:08:32.751737       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 09:08:32.779514       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0111 09:08:32.781287       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0111 09:08:32.808690       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 09:08:32.830335       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0111 09:08:32.857217       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 09:08:32.935083       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 09:08:32.949130       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	I0111 09:08:34.513042       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 09:08:40 default-k8s-diff-port-588333 kubelet[1297]: I0111 09:08:40.902887    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8bfcb3a-747f-4072-9916-be69d991bcea-xtables-lock\") pod \"kindnet-8pg22\" (UID: \"d8bfcb3a-747f-4072-9916-be69d991bcea\") " pod="kube-system/kindnet-8pg22"
	Jan 11 09:08:41 default-k8s-diff-port-588333 kubelet[1297]: I0111 09:08:41.033245    1297 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 11 09:08:41 default-k8s-diff-port-588333 kubelet[1297]: E0111 09:08:41.399550    1297 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-588333" containerName="etcd"
	Jan 11 09:08:41 default-k8s-diff-port-588333 kubelet[1297]: W0111 09:08:41.414749    1297 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/crio-cdddd0ca9d3f561f1f989489a5f5c0859cbf9292df93c5079b073204ff610e08 WatchSource:0}: Error finding container cdddd0ca9d3f561f1f989489a5f5c0859cbf9292df93c5079b073204ff610e08: Status 404 returned error can't find the container with id cdddd0ca9d3f561f1f989489a5f5c0859cbf9292df93c5079b073204ff610e08
	Jan 11 09:08:43 default-k8s-diff-port-588333 kubelet[1297]: E0111 09:08:43.397621    1297 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-588333" containerName="kube-controller-manager"
	Jan 11 09:08:43 default-k8s-diff-port-588333 kubelet[1297]: I0111 09:08:43.433903    1297 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g4x2l" podStartSLOduration=3.433886198 podStartE2EDuration="3.433886198s" podCreationTimestamp="2026-01-11 09:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:08:41.827935094 +0000 UTC m=+7.356786741" watchObservedRunningTime="2026-01-11 09:08:43.433886198 +0000 UTC m=+8.962737845"
	Jan 11 09:08:43 default-k8s-diff-port-588333 kubelet[1297]: E0111 09:08:43.893832    1297 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-588333" containerName="kube-scheduler"
	Jan 11 09:08:44 default-k8s-diff-port-588333 kubelet[1297]: E0111 09:08:43.999705    1297 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-588333" containerName="kube-apiserver"
	Jan 11 09:08:51 default-k8s-diff-port-588333 kubelet[1297]: E0111 09:08:51.397117    1297 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-588333" containerName="etcd"
	Jan 11 09:08:51 default-k8s-diff-port-588333 kubelet[1297]: I0111 09:08:51.411160    1297 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-8pg22" podStartSLOduration=7.87302603 podStartE2EDuration="11.411146011s" podCreationTimestamp="2026-01-11 09:08:40 +0000 UTC" firstStartedPulling="2026-01-11 09:08:41.454813086 +0000 UTC m=+6.983664725" lastFinishedPulling="2026-01-11 09:08:44.992933067 +0000 UTC m=+10.521784706" observedRunningTime="2026-01-11 09:08:45.850756676 +0000 UTC m=+11.379608323" watchObservedRunningTime="2026-01-11 09:08:51.411146011 +0000 UTC m=+16.939997649"
	Jan 11 09:08:53 default-k8s-diff-port-588333 kubelet[1297]: E0111 09:08:53.410449    1297 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-588333" containerName="kube-controller-manager"
	Jan 11 09:08:53 default-k8s-diff-port-588333 kubelet[1297]: E0111 09:08:53.902431    1297 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-588333" containerName="kube-scheduler"
	Jan 11 09:08:54 default-k8s-diff-port-588333 kubelet[1297]: E0111 09:08:54.013089    1297 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-588333" containerName="kube-apiserver"
	Jan 11 09:08:55 default-k8s-diff-port-588333 kubelet[1297]: I0111 09:08:55.811405    1297 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 11 09:08:56 default-k8s-diff-port-588333 kubelet[1297]: I0111 09:08:56.015981    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/acdfb8c3-6907-4ce4-b95f-2369474a2ece-tmp\") pod \"storage-provisioner\" (UID: \"acdfb8c3-6907-4ce4-b95f-2369474a2ece\") " pod="kube-system/storage-provisioner"
	Jan 11 09:08:56 default-k8s-diff-port-588333 kubelet[1297]: I0111 09:08:56.016035    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-779gj\" (UniqueName: \"kubernetes.io/projected/acdfb8c3-6907-4ce4-b95f-2369474a2ece-kube-api-access-779gj\") pod \"storage-provisioner\" (UID: \"acdfb8c3-6907-4ce4-b95f-2369474a2ece\") " pod="kube-system/storage-provisioner"
	Jan 11 09:08:56 default-k8s-diff-port-588333 kubelet[1297]: I0111 09:08:56.016059    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54a6cea1-73a3-4ca6-bd7a-afbbac903c9b-config-volume\") pod \"coredns-7d764666f9-2lh6p\" (UID: \"54a6cea1-73a3-4ca6-bd7a-afbbac903c9b\") " pod="kube-system/coredns-7d764666f9-2lh6p"
	Jan 11 09:08:56 default-k8s-diff-port-588333 kubelet[1297]: I0111 09:08:56.016081    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zctx\" (UniqueName: \"kubernetes.io/projected/54a6cea1-73a3-4ca6-bd7a-afbbac903c9b-kube-api-access-7zctx\") pod \"coredns-7d764666f9-2lh6p\" (UID: \"54a6cea1-73a3-4ca6-bd7a-afbbac903c9b\") " pod="kube-system/coredns-7d764666f9-2lh6p"
	Jan 11 09:08:56 default-k8s-diff-port-588333 kubelet[1297]: W0111 09:08:56.336230    1297 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/crio-42f7e83369b0d77e4d2d19be494b247fd862a3dc7aa09930d6c011e1fd422242 WatchSource:0}: Error finding container 42f7e83369b0d77e4d2d19be494b247fd862a3dc7aa09930d6c011e1fd422242: Status 404 returned error can't find the container with id 42f7e83369b0d77e4d2d19be494b247fd862a3dc7aa09930d6c011e1fd422242
	Jan 11 09:08:56 default-k8s-diff-port-588333 kubelet[1297]: E0111 09:08:56.855798    1297 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-2lh6p" containerName="coredns"
	Jan 11 09:08:56 default-k8s-diff-port-588333 kubelet[1297]: I0111 09:08:56.900423    1297 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-2lh6p" podStartSLOduration=16.900396578 podStartE2EDuration="16.900396578s" podCreationTimestamp="2026-01-11 09:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:08:56.877658202 +0000 UTC m=+22.406509849" watchObservedRunningTime="2026-01-11 09:08:56.900396578 +0000 UTC m=+22.429248217"
	Jan 11 09:08:56 default-k8s-diff-port-588333 kubelet[1297]: I0111 09:08:56.945639    1297 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.945621275 podStartE2EDuration="14.945621275s" podCreationTimestamp="2026-01-11 09:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:08:56.903097799 +0000 UTC m=+22.431949446" watchObservedRunningTime="2026-01-11 09:08:56.945621275 +0000 UTC m=+22.474472922"
	Jan 11 09:08:57 default-k8s-diff-port-588333 kubelet[1297]: E0111 09:08:57.863107    1297 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-2lh6p" containerName="coredns"
	Jan 11 09:08:59 default-k8s-diff-port-588333 kubelet[1297]: I0111 09:08:59.139821    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmxd9\" (UniqueName: \"kubernetes.io/projected/d123dc54-6086-4b61-9c4a-b6591f715b33-kube-api-access-jmxd9\") pod \"busybox\" (UID: \"d123dc54-6086-4b61-9c4a-b6591f715b33\") " pod="default/busybox"
	Jan 11 09:08:59 default-k8s-diff-port-588333 kubelet[1297]: W0111 09:08:59.336483    1297 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/crio-a203e18764827cac39ec775b2ac91b2cf70dc352f35537f137ecdf86adeffa79 WatchSource:0}: Error finding container a203e18764827cac39ec775b2ac91b2cf70dc352f35537f137ecdf86adeffa79: Status 404 returned error can't find the container with id a203e18764827cac39ec775b2ac91b2cf70dc352f35537f137ecdf86adeffa79
	
	
	==> storage-provisioner [eab36d6e9dd32556c8a30fabc5b9875fc591b2c95edf74072cfcba90e5a1fddc] <==
	I0111 09:08:56.337501       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 09:08:56.385161       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 09:08:56.388666       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 09:08:56.404262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:56.416990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:08:56.417130       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 09:08:56.418550       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-588333_cd544667-4d4d-4a4b-a46e-7da849251e9a!
	I0111 09:08:56.419673       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ddfb1208-c7f0-4849-a965-1b5d359cfb5d", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-588333_cd544667-4d4d-4a4b-a46e-7da849251e9a became leader
	W0111 09:08:56.419939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:56.487912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:08:56.520128       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-588333_cd544667-4d4d-4a4b-a46e-7da849251e9a!
	W0111 09:08:58.492463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:08:58.503952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:00.507762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:00.516895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:02.520487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:02.527170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:04.530974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:04.535649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:06.538998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:06.543456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:08.547263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:08.555423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-588333 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.58s)
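(Note: the two post-mortem checks recorded above can be repeated by hand against the same profile, assuming the profile and the locally built binary still exist; these are the exact commands the test helper already ran, collected here only for convenience.)

    out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333
    kubectl --context default-k8s-diff-port-588333 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running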

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-630626 --alsologtostderr -v=1
E0111 09:09:39.953833  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-630626 --alsologtostderr -v=1: exit status 80 (2.370344719s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-630626 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 09:09:38.571384  793729 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:09:38.571530  793729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:09:38.571537  793729 out.go:374] Setting ErrFile to fd 2...
	I0111 09:09:38.571543  793729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:09:38.571821  793729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:09:38.572137  793729 out.go:368] Setting JSON to false
	I0111 09:09:38.572150  793729 mustload.go:66] Loading cluster: embed-certs-630626
	I0111 09:09:38.572588  793729 config.go:182] Loaded profile config "embed-certs-630626": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:09:38.573145  793729 cli_runner.go:164] Run: docker container inspect embed-certs-630626 --format={{.State.Status}}
	I0111 09:09:38.595497  793729 host.go:66] Checking if "embed-certs-630626" exists ...
	I0111 09:09:38.595824  793729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:09:38.696473  793729 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2026-01-11 09:09:38.684017717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:09:38.697105  793729 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-630626 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0111 09:09:38.700726  793729 out.go:179] * Pausing node embed-certs-630626 ... 
	I0111 09:09:38.704608  793729 host.go:66] Checking if "embed-certs-630626" exists ...
	I0111 09:09:38.705048  793729 ssh_runner.go:195] Run: systemctl --version
	I0111 09:09:38.705111  793729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-630626
	I0111 09:09:38.732342  793729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/embed-certs-630626/id_rsa Username:docker}
	I0111 09:09:38.845475  793729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:09:38.862994  793729 pause.go:52] kubelet running: true
	I0111 09:09:38.863071  793729 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:09:39.190647  793729 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:09:39.190740  793729 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:09:39.280779  793729 cri.go:96] found id: "5433957fc000a476a42994d946d0e7a7cd56580b449b098078502bf7e619aca2"
	I0111 09:09:39.280813  793729 cri.go:96] found id: "a82b2a8a7fc65f783a5f00fca30865fd5660c27d20ba8985f978a9336000e0ea"
	I0111 09:09:39.280819  793729 cri.go:96] found id: "7cc6dfe7ebe69d7fa2e4a83fcc9f97ca76f25f233e8dec6c17d486be7da04784"
	I0111 09:09:39.280823  793729 cri.go:96] found id: "444e2483c5a5dafffda230325a3219f14c242a9d4a210093339135b8a262b2cc"
	I0111 09:09:39.280826  793729 cri.go:96] found id: "e7c65de22a34fdcd786dca28f03d4318acafc8cc56ddf2febf531b131750a055"
	I0111 09:09:39.280830  793729 cri.go:96] found id: "59166e3edc5b1c5b88038cb476fcc1bb937cc685c07c9cc1684740b373d960e6"
	I0111 09:09:39.280833  793729 cri.go:96] found id: "d655f1b34c99b7061f83f1625edf83fdeafc1d3bd3a3df8027784d5a67499088"
	I0111 09:09:39.280836  793729 cri.go:96] found id: "6e1ee699631c60b05b3bf5f637dc3dc66eaa29e2df72af24028e423f9e31416f"
	I0111 09:09:39.280839  793729 cri.go:96] found id: "50f8850ccb505fa89954b440b9419765295b2320ecae2ea5cb7da62fd4a99f39"
	I0111 09:09:39.280875  793729 cri.go:96] found id: "c12b2180c9cbc5cb5860b6e1ebf15038723f376a03d7b7c5a71dfb5c3ccf4a8e"
	I0111 09:09:39.280886  793729 cri.go:96] found id: "aed85adc7d903d573ee408934699b77dc8ca903cc510c2b4cdc9390e57686b60"
	I0111 09:09:39.280889  793729 cri.go:96] found id: ""
	I0111 09:09:39.280958  793729 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:09:39.298652  793729 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:09:39Z" level=error msg="open /run/runc: no such file or directory"
	I0111 09:09:39.535144  793729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:09:39.555395  793729 pause.go:52] kubelet running: false
	I0111 09:09:39.555457  793729 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:09:39.788237  793729 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:09:39.788374  793729 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:09:39.890367  793729 cri.go:96] found id: "5433957fc000a476a42994d946d0e7a7cd56580b449b098078502bf7e619aca2"
	I0111 09:09:39.890405  793729 cri.go:96] found id: "a82b2a8a7fc65f783a5f00fca30865fd5660c27d20ba8985f978a9336000e0ea"
	I0111 09:09:39.890411  793729 cri.go:96] found id: "7cc6dfe7ebe69d7fa2e4a83fcc9f97ca76f25f233e8dec6c17d486be7da04784"
	I0111 09:09:39.890415  793729 cri.go:96] found id: "444e2483c5a5dafffda230325a3219f14c242a9d4a210093339135b8a262b2cc"
	I0111 09:09:39.890418  793729 cri.go:96] found id: "e7c65de22a34fdcd786dca28f03d4318acafc8cc56ddf2febf531b131750a055"
	I0111 09:09:39.890422  793729 cri.go:96] found id: "59166e3edc5b1c5b88038cb476fcc1bb937cc685c07c9cc1684740b373d960e6"
	I0111 09:09:39.890444  793729 cri.go:96] found id: "d655f1b34c99b7061f83f1625edf83fdeafc1d3bd3a3df8027784d5a67499088"
	I0111 09:09:39.890454  793729 cri.go:96] found id: "6e1ee699631c60b05b3bf5f637dc3dc66eaa29e2df72af24028e423f9e31416f"
	I0111 09:09:39.890457  793729 cri.go:96] found id: "50f8850ccb505fa89954b440b9419765295b2320ecae2ea5cb7da62fd4a99f39"
	I0111 09:09:39.890463  793729 cri.go:96] found id: "c12b2180c9cbc5cb5860b6e1ebf15038723f376a03d7b7c5a71dfb5c3ccf4a8e"
	I0111 09:09:39.890467  793729 cri.go:96] found id: "aed85adc7d903d573ee408934699b77dc8ca903cc510c2b4cdc9390e57686b60"
	I0111 09:09:39.890486  793729 cri.go:96] found id: ""
	I0111 09:09:39.890566  793729 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:09:40.460072  793729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:09:40.481681  793729 pause.go:52] kubelet running: false
	I0111 09:09:40.481787  793729 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:09:40.684182  793729 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:09:40.684307  793729 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:09:40.825354  793729 cri.go:96] found id: "5433957fc000a476a42994d946d0e7a7cd56580b449b098078502bf7e619aca2"
	I0111 09:09:40.825381  793729 cri.go:96] found id: "a82b2a8a7fc65f783a5f00fca30865fd5660c27d20ba8985f978a9336000e0ea"
	I0111 09:09:40.825387  793729 cri.go:96] found id: "7cc6dfe7ebe69d7fa2e4a83fcc9f97ca76f25f233e8dec6c17d486be7da04784"
	I0111 09:09:40.825391  793729 cri.go:96] found id: "444e2483c5a5dafffda230325a3219f14c242a9d4a210093339135b8a262b2cc"
	I0111 09:09:40.825394  793729 cri.go:96] found id: "e7c65de22a34fdcd786dca28f03d4318acafc8cc56ddf2febf531b131750a055"
	I0111 09:09:40.825398  793729 cri.go:96] found id: "59166e3edc5b1c5b88038cb476fcc1bb937cc685c07c9cc1684740b373d960e6"
	I0111 09:09:40.825401  793729 cri.go:96] found id: "d655f1b34c99b7061f83f1625edf83fdeafc1d3bd3a3df8027784d5a67499088"
	I0111 09:09:40.825405  793729 cri.go:96] found id: "6e1ee699631c60b05b3bf5f637dc3dc66eaa29e2df72af24028e423f9e31416f"
	I0111 09:09:40.825409  793729 cri.go:96] found id: "50f8850ccb505fa89954b440b9419765295b2320ecae2ea5cb7da62fd4a99f39"
	I0111 09:09:40.825414  793729 cri.go:96] found id: "c12b2180c9cbc5cb5860b6e1ebf15038723f376a03d7b7c5a71dfb5c3ccf4a8e"
	I0111 09:09:40.825418  793729 cri.go:96] found id: "aed85adc7d903d573ee408934699b77dc8ca903cc510c2b4cdc9390e57686b60"
	I0111 09:09:40.825422  793729 cri.go:96] found id: ""
	I0111 09:09:40.825472  793729 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:09:40.848995  793729 out.go:203] 
	W0111 09:09:40.852532  793729 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:09:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:09:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 09:09:40.852558  793729 out.go:285] * 
	* 
	W0111 09:09:40.858286  793729 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 09:09:40.863986  793729 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-630626 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-630626
helpers_test.go:244: (dbg) docker inspect embed-certs-630626:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b",
	        "Created": "2026-01-11T09:07:25.16144692Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 788270,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:08:33.072356843Z",
	            "FinishedAt": "2026-01-11T09:08:32.125450667Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/hosts",
	        "LogPath": "/var/lib/docker/containers/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b-json.log",
	        "Name": "/embed-certs-630626",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-630626:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-630626",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b",
	                "LowerDir": "/var/lib/docker/overlay2/7fc45b1fcb57b15f0cc509ef006284c6ec8846193d1f6371d66840b980705ea4-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7fc45b1fcb57b15f0cc509ef006284c6ec8846193d1f6371d66840b980705ea4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7fc45b1fcb57b15f0cc509ef006284c6ec8846193d1f6371d66840b980705ea4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7fc45b1fcb57b15f0cc509ef006284c6ec8846193d1f6371d66840b980705ea4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-630626",
	                "Source": "/var/lib/docker/volumes/embed-certs-630626/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-630626",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-630626",
	                "name.minikube.sigs.k8s.io": "embed-certs-630626",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "add34e63c63cb8ebc5a0238f61532f67092e73b371cf42b92b249c76f14edda1",
	            "SandboxKey": "/var/run/docker/netns/add34e63c63c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-630626": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:0a:31:4a:94:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45ad769942edefa5685d287911d0a8d87021dd76ee2918e11cae91d80793b700",
	                    "EndpointID": "70eecdc579b32cc19edea9431ebe64865f36b13e14328594c9674730492a677a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-630626",
	                        "25c377e6342a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-630626 -n embed-certs-630626
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-630626 -n embed-certs-630626: exit status 2 (458.742813ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-630626 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-630626 logs -n 25: (1.721100365s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-236664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │                     │
	│ stop    │ -p no-preload-236664 --alsologtostderr -v=3                                                                                                                              │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │ 11 Jan 26 09:06 UTC │
	│ addons  │ enable dashboard -p no-preload-236664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ image   │ no-preload-236664 image list --format=json                                                                                                                               │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ pause   │ -p no-preload-236664 --alsologtostderr -v=1                                                                                                                              │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │                     │
	│ delete  │ -p no-preload-236664                                                                                                                                                     │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ delete  │ -p no-preload-236664                                                                                                                                                     │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:08 UTC │
	│ ssh     │ force-systemd-flag-630015 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p force-systemd-flag-630015                                                                                                                                             │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p disable-driver-mounts-781777                                                                                                                                          │ disable-driver-mounts-781777 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-630626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │                     │
	│ stop    │ -p embed-certs-630626 --alsologtostderr -v=3                                                                                                                             │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-630626 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:09 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-588333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-588333 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-588333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	│ image   │ embed-certs-630626 image list --format=json                                                                                                                              │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ pause   │ -p embed-certs-630626 --alsologtostderr -v=1                                                                                                                             │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:09:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:09:21.951745  791650 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:09:21.951878  791650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:09:21.951889  791650 out.go:374] Setting ErrFile to fd 2...
	I0111 09:09:21.951894  791650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:09:21.952254  791650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:09:21.952682  791650 out.go:368] Setting JSON to false
	I0111 09:09:21.953645  791650 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13912,"bootTime":1768108650,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:09:21.953744  791650 start.go:143] virtualization:  
	I0111 09:09:21.956790  791650 out.go:179] * [default-k8s-diff-port-588333] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:09:21.959002  791650 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:09:21.959155  791650 notify.go:221] Checking for updates...
	I0111 09:09:21.964646  791650 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:09:21.967454  791650 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:09:21.970309  791650 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:09:21.973286  791650 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:09:21.976298  791650 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:09:21.979749  791650 config.go:182] Loaded profile config "default-k8s-diff-port-588333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:09:21.980281  791650 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:09:22.011820  791650 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:09:22.011944  791650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:09:22.071997  791650 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:09:22.062443306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:09:22.072111  791650 docker.go:319] overlay module found
	I0111 09:09:22.075297  791650 out.go:179] * Using the docker driver based on existing profile
	I0111 09:09:22.078316  791650 start.go:309] selected driver: docker
	I0111 09:09:22.078337  791650 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-588333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-588333 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:09:22.078457  791650 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:09:22.079195  791650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:09:22.159084  791650 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:09:22.149225648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:09:22.159412  791650 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:09:22.159443  791650 cni.go:84] Creating CNI manager for ""
	I0111 09:09:22.159496  791650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:09:22.159539  791650 start.go:353] cluster config:
	{Name:default-k8s-diff-port-588333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-588333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:09:22.162760  791650 out.go:179] * Starting "default-k8s-diff-port-588333" primary control-plane node in "default-k8s-diff-port-588333" cluster
	I0111 09:09:22.165535  791650 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:09:22.168420  791650 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:09:22.171225  791650 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:09:22.171278  791650 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 09:09:22.171289  791650 cache.go:65] Caching tarball of preloaded images
	I0111 09:09:22.171344  791650 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:09:22.171393  791650 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:09:22.171404  791650 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 09:09:22.171509  791650 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/config.json ...
	I0111 09:09:22.192002  791650 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:09:22.192025  791650 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:09:22.192045  791650 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:09:22.192076  791650 start.go:360] acquireMachinesLock for default-k8s-diff-port-588333: {Name:mk6f824bc7ba249281d1a4e0d65911b4e29ac8d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:09:22.192145  791650 start.go:364] duration metric: took 46.015µs to acquireMachinesLock for "default-k8s-diff-port-588333"
	I0111 09:09:22.192170  791650 start.go:96] Skipping create...Using existing machine configuration
	I0111 09:09:22.192177  791650 fix.go:54] fixHost starting: 
	I0111 09:09:22.192436  791650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:09:22.208854  791650 fix.go:112] recreateIfNeeded on default-k8s-diff-port-588333: state=Stopped err=<nil>
	W0111 09:09:22.208887  791650 fix.go:138] unexpected machine state, will restart: <nil>
	W0111 09:09:19.098668  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	W0111 09:09:21.098821  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	W0111 09:09:23.100306  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	I0111 09:09:25.099137  788146 pod_ready.go:94] pod "coredns-7d764666f9-x5tzj" is "Ready"
	I0111 09:09:25.099167  788146 pod_ready.go:86] duration metric: took 37.005547516s for pod "coredns-7d764666f9-x5tzj" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.102120  788146 pod_ready.go:83] waiting for pod "etcd-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.107044  788146 pod_ready.go:94] pod "etcd-embed-certs-630626" is "Ready"
	I0111 09:09:25.107120  788146 pod_ready.go:86] duration metric: took 4.944473ms for pod "etcd-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.109706  788146 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.114445  788146 pod_ready.go:94] pod "kube-apiserver-embed-certs-630626" is "Ready"
	I0111 09:09:25.114482  788146 pod_ready.go:86] duration metric: took 4.744053ms for pod "kube-apiserver-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.117276  788146 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.296625  788146 pod_ready.go:94] pod "kube-controller-manager-embed-certs-630626" is "Ready"
	I0111 09:09:25.296656  788146 pod_ready.go:86] duration metric: took 179.355363ms for pod "kube-controller-manager-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.496713  788146 pod_ready.go:83] waiting for pod "kube-proxy-7xnsq" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.897659  788146 pod_ready.go:94] pod "kube-proxy-7xnsq" is "Ready"
	I0111 09:09:25.897692  788146 pod_ready.go:86] duration metric: took 400.947814ms for pod "kube-proxy-7xnsq" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:26.098598  788146 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:26.496535  788146 pod_ready.go:94] pod "kube-scheduler-embed-certs-630626" is "Ready"
	I0111 09:09:26.496563  788146 pod_ready.go:86] duration metric: took 397.935641ms for pod "kube-scheduler-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:26.496576  788146 pod_ready.go:40] duration metric: took 38.407106802s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:09:26.557201  788146 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 09:09:26.560631  788146 out.go:203] 
	W0111 09:09:26.563666  788146 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 09:09:26.566706  788146 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:09:26.570101  788146 out.go:179] * Done! kubectl is now configured to use "embed-certs-630626" cluster and "default" namespace by default
	I0111 09:09:22.212080  791650 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-588333" ...
	I0111 09:09:22.212185  791650 cli_runner.go:164] Run: docker start default-k8s-diff-port-588333
	I0111 09:09:22.460255  791650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:09:22.481249  791650 kic.go:430] container "default-k8s-diff-port-588333" state is running.
	I0111 09:09:22.481729  791650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-588333
	I0111 09:09:22.502385  791650 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/config.json ...
	I0111 09:09:22.502632  791650 machine.go:94] provisionDockerMachine start ...
	I0111 09:09:22.503472  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:22.525737  791650 main.go:144] libmachine: Using SSH client type: native
	I0111 09:09:22.527387  791650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I0111 09:09:22.527411  791650 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 09:09:22.528173  791650 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0111 09:09:25.678067  791650 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-588333
	
	I0111 09:09:25.678099  791650 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-588333"
	I0111 09:09:25.678220  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:25.699103  791650 main.go:144] libmachine: Using SSH client type: native
	I0111 09:09:25.699468  791650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I0111 09:09:25.699489  791650 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-588333 && echo "default-k8s-diff-port-588333" | sudo tee /etc/hostname
	I0111 09:09:25.860947  791650 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-588333
	
	I0111 09:09:25.861066  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:25.879116  791650 main.go:144] libmachine: Using SSH client type: native
	I0111 09:09:25.879434  791650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I0111 09:09:25.879460  791650 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-588333' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-588333/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-588333' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 09:09:26.030801  791650 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 09:09:26.030870  791650 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 09:09:26.030944  791650 ubuntu.go:190] setting up certificates
	I0111 09:09:26.030974  791650 provision.go:84] configureAuth start
	I0111 09:09:26.031078  791650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-588333
	I0111 09:09:26.048890  791650 provision.go:143] copyHostCerts
	I0111 09:09:26.048962  791650 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 09:09:26.048971  791650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 09:09:26.049056  791650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 09:09:26.049162  791650 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 09:09:26.049167  791650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 09:09:26.049193  791650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 09:09:26.049305  791650 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 09:09:26.049310  791650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 09:09:26.049376  791650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 09:09:26.049421  791650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-588333 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-588333 localhost minikube]
	I0111 09:09:26.209259  791650 provision.go:177] copyRemoteCerts
	I0111 09:09:26.209342  791650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 09:09:26.209387  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:26.228735  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:26.333842  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 09:09:26.354071  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0111 09:09:26.371446  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 09:09:26.390188  791650 provision.go:87] duration metric: took 359.177252ms to configureAuth
	I0111 09:09:26.390263  791650 ubuntu.go:206] setting minikube options for container-runtime
	I0111 09:09:26.390478  791650 config.go:182] Loaded profile config "default-k8s-diff-port-588333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:09:26.390607  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:26.408361  791650 main.go:144] libmachine: Using SSH client type: native
	I0111 09:09:26.408697  791650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I0111 09:09:26.408721  791650 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 09:09:26.862248  791650 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 09:09:26.862270  791650 machine.go:97] duration metric: took 4.3596277s to provisionDockerMachine
	I0111 09:09:26.862281  791650 start.go:293] postStartSetup for "default-k8s-diff-port-588333" (driver="docker")
	I0111 09:09:26.862293  791650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 09:09:26.862353  791650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 09:09:26.862402  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:26.892432  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:27.011743  791650 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 09:09:27.017130  791650 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 09:09:27.017157  791650 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 09:09:27.017168  791650 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 09:09:27.017223  791650 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 09:09:27.017297  791650 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 09:09:27.017400  791650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 09:09:27.026022  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:09:27.055462  791650 start.go:296] duration metric: took 193.164156ms for postStartSetup
	I0111 09:09:27.055815  791650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 09:09:27.055945  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:27.073761  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:27.179627  791650 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 09:09:27.184985  791650 fix.go:56] duration metric: took 4.992769243s for fixHost
	I0111 09:09:27.185013  791650 start.go:83] releasing machines lock for "default-k8s-diff-port-588333", held for 4.992855702s
	I0111 09:09:27.185126  791650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-588333
	I0111 09:09:27.203131  791650 ssh_runner.go:195] Run: cat /version.json
	I0111 09:09:27.203148  791650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 09:09:27.203185  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:27.203213  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:27.231157  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:27.238259  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:27.337615  791650 ssh_runner.go:195] Run: systemctl --version
	I0111 09:09:27.462448  791650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 09:09:27.501443  791650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 09:09:27.506285  791650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 09:09:27.506365  791650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 09:09:27.515787  791650 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 09:09:27.515815  791650 start.go:496] detecting cgroup driver to use...
	I0111 09:09:27.515847  791650 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 09:09:27.515898  791650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 09:09:27.531020  791650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 09:09:27.544385  791650 docker.go:218] disabling cri-docker service (if available) ...
	I0111 09:09:27.544454  791650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 09:09:27.559529  791650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 09:09:27.579711  791650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 09:09:27.710517  791650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 09:09:27.849694  791650 docker.go:234] disabling docker service ...
	I0111 09:09:27.849810  791650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 09:09:27.865031  791650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 09:09:27.878702  791650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 09:09:27.982547  791650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 09:09:28.117461  791650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 09:09:28.132135  791650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 09:09:28.148637  791650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 09:09:28.148795  791650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.159272  791650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 09:09:28.159344  791650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.168691  791650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.178617  791650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.188686  791650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 09:09:28.197392  791650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.206573  791650 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.215480  791650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.224890  791650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 09:09:28.232816  791650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 09:09:28.240994  791650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:09:28.354636  791650 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 09:09:28.528741  791650 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 09:09:28.528851  791650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 09:09:28.532701  791650 start.go:574] Will wait 60s for crictl version
	I0111 09:09:28.532787  791650 ssh_runner.go:195] Run: which crictl
	I0111 09:09:28.536153  791650 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 09:09:28.559429  791650 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 09:09:28.559577  791650 ssh_runner.go:195] Run: crio --version
	I0111 09:09:28.587275  791650 ssh_runner.go:195] Run: crio --version
	I0111 09:09:28.619438  791650 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 09:09:28.622291  791650 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-588333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:09:28.638175  791650 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 09:09:28.641834  791650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:09:28.651336  791650 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-588333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-588333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 09:09:28.651458  791650 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:09:28.651520  791650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:09:28.695891  791650 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:09:28.695914  791650 crio.go:433] Images already preloaded, skipping extraction
	I0111 09:09:28.695975  791650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:09:28.720523  791650 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:09:28.720550  791650 cache_images.go:86] Images are preloaded, skipping loading
	I0111 09:09:28.720558  791650 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.35.0 crio true true} ...
	I0111 09:09:28.720665  791650 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-588333 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-588333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 09:09:28.720754  791650 ssh_runner.go:195] Run: crio config
	I0111 09:09:28.791018  791650 cni.go:84] Creating CNI manager for ""
	I0111 09:09:28.791044  791650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:09:28.791065  791650 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 09:09:28.791092  791650 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-588333 NodeName:default-k8s-diff-port-588333 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 09:09:28.791221  791650 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-588333"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 09:09:28.791294  791650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 09:09:28.800349  791650 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 09:09:28.800418  791650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 09:09:28.807958  791650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0111 09:09:28.821029  791650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 09:09:28.833944  791650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I0111 09:09:28.846553  791650 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 09:09:28.849973  791650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:09:28.859352  791650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:09:28.980078  791650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:09:28.996864  791650 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333 for IP: 192.168.76.2
	I0111 09:09:28.996891  791650 certs.go:195] generating shared ca certs ...
	I0111 09:09:28.996908  791650 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:09:28.997137  791650 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 09:09:28.997208  791650 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 09:09:28.997223  791650 certs.go:257] generating profile certs ...
	I0111 09:09:28.997365  791650 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/client.key
	I0111 09:09:28.997467  791650 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/apiserver.key.04b53819
	I0111 09:09:28.997575  791650 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/proxy-client.key
	I0111 09:09:28.997736  791650 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 09:09:28.997786  791650 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 09:09:28.997815  791650 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 09:09:28.997855  791650 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 09:09:28.997898  791650 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 09:09:28.997945  791650 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 09:09:28.998019  791650 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:09:28.998822  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 09:09:29.017911  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 09:09:29.037882  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 09:09:29.055894  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 09:09:29.073010  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0111 09:09:29.098046  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0111 09:09:29.117492  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 09:09:29.135566  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 09:09:29.154676  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 09:09:29.181160  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 09:09:29.207960  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 09:09:29.237679  791650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 09:09:29.257268  791650 ssh_runner.go:195] Run: openssl version
	I0111 09:09:29.266261  791650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:09:29.275971  791650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 09:09:29.288738  791650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:09:29.292732  791650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:09:29.292851  791650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:09:29.337216  791650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 09:09:29.344632  791650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 09:09:29.353065  791650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 09:09:29.360538  791650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 09:09:29.364255  791650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 09:09:29.364346  791650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 09:09:29.405648  791650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 09:09:29.413104  791650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 09:09:29.421414  791650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 09:09:29.429132  791650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 09:09:29.433002  791650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 09:09:29.433107  791650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 09:09:29.474697  791650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 09:09:29.482684  791650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 09:09:29.486584  791650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 09:09:29.527897  791650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 09:09:29.569134  791650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 09:09:29.611091  791650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 09:09:29.670651  791650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 09:09:29.721150  791650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0111 09:09:29.784729  791650 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-588333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-588333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:09:29.784879  791650 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 09:09:29.784989  791650 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 09:09:29.867596  791650 cri.go:96] found id: "e7c36bd895a088cee8bf01ae0ac34e5e8eb26713282675fc6d4788401b926477"
	I0111 09:09:29.867668  791650 cri.go:96] found id: "076f1fdf555e06139c3c03315dc96b76587a0287090e0f05c5db8be14ea7a439"
	I0111 09:09:29.867696  791650 cri.go:96] found id: "2ae07275c0ab7a01e2063bff151242ed44a810e62093e55ae36786b9db6a2095"
	I0111 09:09:29.867740  791650 cri.go:96] found id: "6f627745c3daad695d0b29049d2cbdb0651dcdbf59d1dfadfe4715bf0735f857"
	I0111 09:09:29.867762  791650 cri.go:96] found id: ""
	I0111 09:09:29.867857  791650 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 09:09:29.883104  791650 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:09:29Z" level=error msg="open /run/runc: no such file or directory"
	I0111 09:09:29.883254  791650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 09:09:29.903552  791650 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 09:09:29.903639  791650 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 09:09:29.903731  791650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 09:09:29.913213  791650 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 09:09:29.914212  791650 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-588333" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:09:29.914851  791650 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-575040/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-588333" cluster setting kubeconfig missing "default-k8s-diff-port-588333" context setting]
	I0111 09:09:29.919146  791650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:09:29.925295  791650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 09:09:29.941053  791650 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0111 09:09:29.941093  791650 kubeadm.go:602] duration metric: took 37.434409ms to restartPrimaryControlPlane
	I0111 09:09:29.941104  791650 kubeadm.go:403] duration metric: took 156.384319ms to StartCluster
	I0111 09:09:29.941122  791650 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:09:29.941202  791650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:09:29.942777  791650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:09:29.943037  791650 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:09:29.943248  791650 config.go:182] Loaded profile config "default-k8s-diff-port-588333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:09:29.943294  791650 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:09:29.943363  791650 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-588333"
	I0111 09:09:29.943376  791650 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-588333"
	W0111 09:09:29.943382  791650 addons.go:248] addon storage-provisioner should already be in state true
	I0111 09:09:29.943404  791650 host.go:66] Checking if "default-k8s-diff-port-588333" exists ...
	I0111 09:09:29.943863  791650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:09:29.944324  791650 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-588333"
	I0111 09:09:29.944454  791650 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-588333"
	W0111 09:09:29.944481  791650 addons.go:248] addon dashboard should already be in state true
	I0111 09:09:29.944534  791650 host.go:66] Checking if "default-k8s-diff-port-588333" exists ...
	I0111 09:09:29.945037  791650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:09:29.944379  791650 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-588333"
	I0111 09:09:29.948385  791650 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-588333"
	I0111 09:09:29.948749  791650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:09:29.949379  791650 out.go:179] * Verifying Kubernetes components...
	I0111 09:09:29.954544  791650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:09:30.020357  791650 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0111 09:09:30.020478  791650 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:09:30.021837  791650 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-588333"
	W0111 09:09:30.021861  791650 addons.go:248] addon default-storageclass should already be in state true
	I0111 09:09:30.021892  791650 host.go:66] Checking if "default-k8s-diff-port-588333" exists ...
	I0111 09:09:30.024438  791650 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:09:30.024474  791650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:09:30.024498  791650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:09:30.024529  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:30.027390  791650 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0111 09:09:30.030413  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0111 09:09:30.030458  791650 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0111 09:09:30.030547  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:30.078439  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:30.088728  791650 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:09:30.088753  791650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:09:30.088839  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:30.096867  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:30.127866  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:30.366833  791650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:09:30.371693  791650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:09:30.377144  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0111 09:09:30.377164  791650 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0111 09:09:30.396794  791650 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-588333" to be "Ready" ...
	I0111 09:09:30.410192  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0111 09:09:30.410230  791650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0111 09:09:30.424991  791650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:09:30.444209  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0111 09:09:30.444235  791650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0111 09:09:30.516541  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0111 09:09:30.516566  791650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0111 09:09:30.583600  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0111 09:09:30.583626  791650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0111 09:09:30.627947  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0111 09:09:30.627973  791650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0111 09:09:30.643433  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0111 09:09:30.643458  791650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0111 09:09:30.660768  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0111 09:09:30.660795  791650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0111 09:09:30.677115  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:09:30.677141  791650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0111 09:09:30.692002  791650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:09:33.077837  791650 node_ready.go:49] node "default-k8s-diff-port-588333" is "Ready"
	I0111 09:09:33.077871  791650 node_ready.go:38] duration metric: took 2.68104583s for node "default-k8s-diff-port-588333" to be "Ready" ...
	I0111 09:09:33.077886  791650 api_server.go:52] waiting for apiserver process to appear ...
	I0111 09:09:33.077947  791650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 09:09:34.832124  791650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.460395496s)
	I0111 09:09:34.832221  791650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.407201442s)
	I0111 09:09:34.832319  791650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.140285683s)
	I0111 09:09:34.832343  791650 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.754381812s)
	I0111 09:09:34.832734  791650 api_server.go:72] duration metric: took 4.889669771s to wait for apiserver process to appear ...
	I0111 09:09:34.832743  791650 api_server.go:88] waiting for apiserver healthz status ...
	I0111 09:09:34.832758  791650 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0111 09:09:34.835618  791650 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-588333 addons enable metrics-server
	
	I0111 09:09:34.843614  791650 api_server.go:325] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0111 09:09:34.846164  791650 api_server.go:141] control plane version: v1.35.0
	I0111 09:09:34.846239  791650 api_server.go:131] duration metric: took 13.479303ms to wait for apiserver health ...
	I0111 09:09:34.846264  791650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 09:09:34.850010  791650 system_pods.go:59] 8 kube-system pods found
	I0111 09:09:34.850099  791650 system_pods.go:61] "coredns-7d764666f9-2lh6p" [54a6cea1-73a3-4ca6-bd7a-afbbac903c9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:09:34.850153  791650 system_pods.go:61] "etcd-default-k8s-diff-port-588333" [ac8ac94a-7e8c-4899-98e5-a36f9dcaa48c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:09:34.850183  791650 system_pods.go:61] "kindnet-8pg22" [d8bfcb3a-747f-4072-9916-be69d991bcea] Running
	I0111 09:09:34.850209  791650 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-588333" [caae5ef6-ad07-477b-904c-95d13dd2c926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:09:34.850244  791650 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-588333" [e8274e7b-a729-43ee-8e0a-c9f156d0bdca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:09:34.850270  791650 system_pods.go:61] "kube-proxy-g4x2l" [23972631-486c-42e5-a029-569447059d31] Running
	I0111 09:09:34.850293  791650 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-588333" [0d5718df-db89-41b2-9cb6-c52b1c63fa5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:09:34.850327  791650 system_pods.go:61] "storage-provisioner" [acdfb8c3-6907-4ce4-b95f-2369474a2ece] Running
	I0111 09:09:34.850352  791650 system_pods.go:74] duration metric: took 4.067473ms to wait for pod list to return data ...
	I0111 09:09:34.850375  791650 default_sa.go:34] waiting for default service account to be created ...
	I0111 09:09:34.851568  791650 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0111 09:09:34.853910  791650 default_sa.go:45] found service account: "default"
	I0111 09:09:34.853972  791650 default_sa.go:55] duration metric: took 3.564206ms for default service account to be created ...
	I0111 09:09:34.853997  791650 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 09:09:34.855142  791650 addons.go:530] duration metric: took 4.911849387s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0111 09:09:34.857256  791650 system_pods.go:86] 8 kube-system pods found
	I0111 09:09:34.857326  791650 system_pods.go:89] "coredns-7d764666f9-2lh6p" [54a6cea1-73a3-4ca6-bd7a-afbbac903c9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:09:34.857351  791650 system_pods.go:89] "etcd-default-k8s-diff-port-588333" [ac8ac94a-7e8c-4899-98e5-a36f9dcaa48c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:09:34.857394  791650 system_pods.go:89] "kindnet-8pg22" [d8bfcb3a-747f-4072-9916-be69d991bcea] Running
	I0111 09:09:34.857420  791650 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-588333" [caae5ef6-ad07-477b-904c-95d13dd2c926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:09:34.857444  791650 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-588333" [e8274e7b-a729-43ee-8e0a-c9f156d0bdca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:09:34.857482  791650 system_pods.go:89] "kube-proxy-g4x2l" [23972631-486c-42e5-a029-569447059d31] Running
	I0111 09:09:34.857509  791650 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-588333" [0d5718df-db89-41b2-9cb6-c52b1c63fa5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:09:34.857530  791650 system_pods.go:89] "storage-provisioner" [acdfb8c3-6907-4ce4-b95f-2369474a2ece] Running
	I0111 09:09:34.857566  791650 system_pods.go:126] duration metric: took 3.549855ms to wait for k8s-apps to be running ...
	I0111 09:09:34.857592  791650 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 09:09:34.857677  791650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:09:34.873079  791650 system_svc.go:56] duration metric: took 15.478509ms WaitForService to wait for kubelet
	I0111 09:09:34.873158  791650 kubeadm.go:587] duration metric: took 4.930093264s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:09:34.873191  791650 node_conditions.go:102] verifying NodePressure condition ...
	I0111 09:09:34.876388  791650 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 09:09:34.876474  791650 node_conditions.go:123] node cpu capacity is 2
	I0111 09:09:34.876521  791650 node_conditions.go:105] duration metric: took 3.307628ms to run NodePressure ...
	I0111 09:09:34.876548  791650 start.go:242] waiting for startup goroutines ...
	I0111 09:09:34.876584  791650 start.go:247] waiting for cluster config update ...
	I0111 09:09:34.876614  791650 start.go:256] writing updated cluster config ...
	I0111 09:09:34.876949  791650 ssh_runner.go:195] Run: rm -f paused
	I0111 09:09:34.880550  791650 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:09:34.884862  791650 pod_ready.go:83] waiting for pod "coredns-7d764666f9-2lh6p" in "kube-system" namespace to be "Ready" or be gone ...
	W0111 09:09:36.890832  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Jan 11 09:09:17 embed-certs-630626 crio[663]: time="2026-01-11T09:09:17.895813225Z" level=info msg="Started container" PID=1699 containerID=5433957fc000a476a42994d946d0e7a7cd56580b449b098078502bf7e619aca2 description=kube-system/storage-provisioner/storage-provisioner id=74bf37a4-40c5-4da7-885f-ebf0f01f30e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=99c7c68acacbab5d1ac33330b8e951fff1b9ee53aa022c69d0eef1c1fdd249ad
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.573267953Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.573674063Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.579514417Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.579806663Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.59221351Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.592245978Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.598890256Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.598996456Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.599027669Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.603326678Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.603359286Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.664777693Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=da0c32f5-bae8-4da1-a8ab-5b8fd82f91a9 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.666082563Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6a09b385-d11f-446f-9098-af608222ea90 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.667161273Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p/dashboard-metrics-scraper" id=a2aa3b92-68a1-4540-8e22-5645a8ec56fe name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.667298275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.674232502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.675245463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.69964425Z" level=info msg="Created container c12b2180c9cbc5cb5860b6e1ebf15038723f376a03d7b7c5a71dfb5c3ccf4a8e: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p/dashboard-metrics-scraper" id=a2aa3b92-68a1-4540-8e22-5645a8ec56fe name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.701993236Z" level=info msg="Starting container: c12b2180c9cbc5cb5860b6e1ebf15038723f376a03d7b7c5a71dfb5c3ccf4a8e" id=79314b50-bcb0-4ad3-bde0-77e1cd355dc4 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.705141962Z" level=info msg="Started container" PID=1785 containerID=c12b2180c9cbc5cb5860b6e1ebf15038723f376a03d7b7c5a71dfb5c3ccf4a8e description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p/dashboard-metrics-scraper id=79314b50-bcb0-4ad3-bde0-77e1cd355dc4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=59c8d502bfe5c479afc6b06de52ac2a4b3088261d6a5bad1d877ccbe6e78b897
	Jan 11 09:09:35 embed-certs-630626 conmon[1783]: conmon c12b2180c9cbc5cb5860 <ninfo>: container 1785 exited with status 1
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.914342746Z" level=info msg="Removing container: d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3" id=122b12c3-cd0f-4e39-9786-6a6c42d14aef name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.923287132Z" level=info msg="Error loading conmon cgroup of container d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3: cgroup deleted" id=122b12c3-cd0f-4e39-9786-6a6c42d14aef name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.926337526Z" level=info msg="Removed container d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p/dashboard-metrics-scraper" id=122b12c3-cd0f-4e39-9786-6a6c42d14aef name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c12b2180c9cbc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago        Exited              dashboard-metrics-scraper   3                   59c8d502bfe5c       dashboard-metrics-scraper-867fb5f87b-x8s5p   kubernetes-dashboard
	5433957fc000a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   99c7c68acacba       storage-provisioner                          kube-system
	aed85adc7d903       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   ff97040e673ff       kubernetes-dashboard-b84665fb8-wpbkc         kubernetes-dashboard
	a82b2a8a7fc65       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           55 seconds ago       Running             coredns                     1                   472c7eaeaf8db       coredns-7d764666f9-x5tzj                     kube-system
	fea5d632e5f45       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   cce5a649e60c6       busybox                                      default
	7cc6dfe7ebe69       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   99c7c68acacba       storage-provisioner                          kube-system
	444e2483c5a5d       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           55 seconds ago       Running             kube-proxy                  1                   be084ecc79684       kube-proxy-7xnsq                             kube-system
	e7c65de22a34f       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago       Running             kindnet-cni                 1                   61f49c7d69c97       kindnet-w5nb5                                kube-system
	59166e3edc5b1       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   529ffd9f35d84       kube-scheduler-embed-certs-630626            kube-system
	d655f1b34c99b       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   cbdb13b39be3a       kube-apiserver-embed-certs-630626            kube-system
	6e1ee699631c6       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   44ff1c72fd931       etcd-embed-certs-630626                      kube-system
	50f8850ccb505       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   91a2d9255c418       kube-controller-manager-embed-certs-630626   kube-system
	
	
	==> coredns [a82b2a8a7fc65f783a5f00fca30865fd5660c27d20ba8985f978a9336000e0ea] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37777 - 90 "HINFO IN 6778001374049786220.61113177844581166. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.012620339s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-630626
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-630626
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=embed-certs-630626
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_07_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:07:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-630626
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:09:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:09:17 +0000   Sun, 11 Jan 2026 09:07:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:09:17 +0000   Sun, 11 Jan 2026 09:07:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:09:17 +0000   Sun, 11 Jan 2026 09:07:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 09:09:17 +0000   Sun, 11 Jan 2026 09:08:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-630626
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                c5657d65-a5db-44ef-92ca-1ef6faf268e8
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-x5tzj                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-embed-certs-630626                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-w5nb5                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-embed-certs-630626             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-630626    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-7xnsq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-embed-certs-630626             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-x8s5p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-wpbkc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node embed-certs-630626 event: Registered Node embed-certs-630626 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node embed-certs-630626 event: Registered Node embed-certs-630626 in Controller
	
	
	==> dmesg <==
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	[Jan11 09:06] overlayfs: idmapped layers are currently not supported
	[Jan11 09:07] overlayfs: idmapped layers are currently not supported
	[Jan11 09:08] overlayfs: idmapped layers are currently not supported
	[ +12.491892] overlayfs: idmapped layers are currently not supported
	[Jan11 09:09] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6e1ee699631c60b05b3bf5f637dc3dc66eaa29e2df72af24028e423f9e31416f] <==
	{"level":"info","ts":"2026-01-11T09:08:41.919556Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:08:41.976617Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:08:41.976534Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T09:08:41.976601Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T09:08:42.014900Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-11T09:08:42.015032Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T09:08:42.015121Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T09:08:42.093224Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T09:08:42.093372Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:08:42.093515Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:08:42.093530Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:08:42.093546Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T09:08:42.127688Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:08:42.127764Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:08:42.127789Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-11T09:08:42.127819Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:08:42.149859Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:08:42.159483Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T09:08:42.159561Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:08:42.165267Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:08:42.149711Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-630626 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:08:42.165983Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:08:42.216064Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:08:42.333337Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-11T09:08:42.334112Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:09:42 up  3:52,  0 user,  load average: 3.16, 2.15, 2.01
	Linux embed-certs-630626 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e7c65de22a34fdcd786dca28f03d4318acafc8cc56ddf2febf531b131750a055] <==
	I0111 09:08:47.367938       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:08:47.368163       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 09:08:47.368298       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:08:47.368310       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:08:47.368320       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:08:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:08:47.566655       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:08:47.566674       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:08:47.566681       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:08:47.566970       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0111 09:09:17.569224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0111 09:09:17.569228       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0111 09:09:17.569397       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0111 09:09:17.569448       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0111 09:09:18.966771       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 09:09:18.966893       1 metrics.go:72] Registering metrics
	I0111 09:09:18.966980       1 controller.go:711] "Syncing nftables rules"
	I0111 09:09:27.566265       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:09:27.566930       1 main.go:301] handling current node
	I0111 09:09:37.569132       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:09:37.569207       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d655f1b34c99b7061f83f1625edf83fdeafc1d3bd3a3df8027784d5a67499088] <==
	I0111 09:08:46.499582       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 09:08:46.499702       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:46.500316       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 09:08:46.503454       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:46.504567       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 09:08:46.506368       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 09:08:46.507004       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 09:08:46.525878       1 aggregator.go:187] initial CRD sync complete...
	I0111 09:08:46.525974       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 09:08:46.526005       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 09:08:46.526034       1 cache.go:39] Caches are synced for autoregister controller
	I0111 09:08:46.527298       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 09:08:46.536577       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0111 09:08:46.590960       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 09:08:46.675966       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 09:08:46.996763       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 09:08:47.533757       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 09:08:47.626349       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 09:08:47.681046       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:08:47.698635       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:08:47.894411       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.142.184"}
	I0111 09:08:47.946640       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.115.244"}
	I0111 09:08:49.804529       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 09:08:49.905926       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 09:08:50.005947       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [50f8850ccb505fa89954b440b9419765295b2320ecae2ea5cb7da62fd4a99f39] <==
	I0111 09:08:49.355600       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.355677       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.354458       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.354840       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.355040       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.355026       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.355095       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.356681       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.357719       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.358104       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.358185       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.358403       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.359071       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.359526       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-630626"
	I0111 09:08:49.359589       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0111 09:08:49.355090       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.363079       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.364159       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.364191       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.371018       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:08:49.383842       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.455205       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.455231       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 09:08:49.455237       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 09:08:49.472091       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [444e2483c5a5dafffda230325a3219f14c242a9d4a210093339135b8a262b2cc] <==
	I0111 09:08:47.777089       1 server_linux.go:53] "Using iptables proxy"
	I0111 09:08:47.995798       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:08:48.096694       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:48.096726       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0111 09:08:48.096798       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 09:08:48.117264       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:08:48.117388       1 server_linux.go:136] "Using iptables Proxier"
	I0111 09:08:48.121425       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 09:08:48.122034       1 server.go:529] "Version info" version="v1.35.0"
	I0111 09:08:48.122162       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:08:48.126558       1 config.go:106] "Starting endpoint slice config controller"
	I0111 09:08:48.126597       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 09:08:48.126764       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 09:08:48.126803       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 09:08:48.126878       1 config.go:200] "Starting service config controller"
	I0111 09:08:48.126896       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 09:08:48.126915       1 config.go:309] "Starting node config controller"
	I0111 09:08:48.126919       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 09:08:48.227294       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 09:08:48.227431       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 09:08:48.227479       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 09:08:48.227499       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [59166e3edc5b1c5b88038cb476fcc1bb937cc685c07c9cc1684740b373d960e6] <==
	I0111 09:08:43.963051       1 serving.go:386] Generated self-signed cert in-memory
	W0111 09:08:46.367997       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 09:08:46.374246       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 09:08:46.374282       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 09:08:46.374289       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 09:08:46.466883       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 09:08:46.466921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:08:46.474760       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 09:08:46.474873       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 09:08:46.474892       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:08:46.474910       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 09:08:46.576176       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 09:09:02 embed-certs-630626 kubelet[793]: I0111 09:09:02.822757     793 scope.go:122] "RemoveContainer" containerID="95603203503cb5d2056ca1af15b778734b13181a4fb8bd9184ba3b904b7dd8b5"
	Jan 11 09:09:02 embed-certs-630626 kubelet[793]: E0111 09:09:02.823001     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-x8s5p_kubernetes-dashboard(9e194a30-7def-4c03-bd28-f49617b490f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" podUID="9e194a30-7def-4c03-bd28-f49617b490f7"
	Jan 11 09:09:10 embed-certs-630626 kubelet[793]: E0111 09:09:10.477587     793 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:10 embed-certs-630626 kubelet[793]: I0111 09:09:10.477636     793 scope.go:122] "RemoveContainer" containerID="95603203503cb5d2056ca1af15b778734b13181a4fb8bd9184ba3b904b7dd8b5"
	Jan 11 09:09:10 embed-certs-630626 kubelet[793]: E0111 09:09:10.477819     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-x8s5p_kubernetes-dashboard(9e194a30-7def-4c03-bd28-f49617b490f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" podUID="9e194a30-7def-4c03-bd28-f49617b490f7"
	Jan 11 09:09:11 embed-certs-630626 kubelet[793]: E0111 09:09:11.664880     793 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:11 embed-certs-630626 kubelet[793]: I0111 09:09:11.664931     793 scope.go:122] "RemoveContainer" containerID="95603203503cb5d2056ca1af15b778734b13181a4fb8bd9184ba3b904b7dd8b5"
	Jan 11 09:09:11 embed-certs-630626 kubelet[793]: I0111 09:09:11.846402     793 scope.go:122] "RemoveContainer" containerID="95603203503cb5d2056ca1af15b778734b13181a4fb8bd9184ba3b904b7dd8b5"
	Jan 11 09:09:11 embed-certs-630626 kubelet[793]: E0111 09:09:11.846720     793 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:11 embed-certs-630626 kubelet[793]: I0111 09:09:11.846748     793 scope.go:122] "RemoveContainer" containerID="d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3"
	Jan 11 09:09:11 embed-certs-630626 kubelet[793]: E0111 09:09:11.846899     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-x8s5p_kubernetes-dashboard(9e194a30-7def-4c03-bd28-f49617b490f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" podUID="9e194a30-7def-4c03-bd28-f49617b490f7"
	Jan 11 09:09:17 embed-certs-630626 kubelet[793]: I0111 09:09:17.863733     793 scope.go:122] "RemoveContainer" containerID="7cc6dfe7ebe69d7fa2e4a83fcc9f97ca76f25f233e8dec6c17d486be7da04784"
	Jan 11 09:09:20 embed-certs-630626 kubelet[793]: E0111 09:09:20.478513     793 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:20 embed-certs-630626 kubelet[793]: I0111 09:09:20.478566     793 scope.go:122] "RemoveContainer" containerID="d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3"
	Jan 11 09:09:20 embed-certs-630626 kubelet[793]: E0111 09:09:20.478740     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-x8s5p_kubernetes-dashboard(9e194a30-7def-4c03-bd28-f49617b490f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" podUID="9e194a30-7def-4c03-bd28-f49617b490f7"
	Jan 11 09:09:24 embed-certs-630626 kubelet[793]: E0111 09:09:24.993113     793 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-x5tzj" containerName="coredns"
	Jan 11 09:09:35 embed-certs-630626 kubelet[793]: E0111 09:09:35.664238     793 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:35 embed-certs-630626 kubelet[793]: I0111 09:09:35.664288     793 scope.go:122] "RemoveContainer" containerID="d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3"
	Jan 11 09:09:35 embed-certs-630626 kubelet[793]: I0111 09:09:35.910510     793 scope.go:122] "RemoveContainer" containerID="d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3"
	Jan 11 09:09:35 embed-certs-630626 kubelet[793]: E0111 09:09:35.911128     793 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:35 embed-certs-630626 kubelet[793]: I0111 09:09:35.911459     793 scope.go:122] "RemoveContainer" containerID="c12b2180c9cbc5cb5860b6e1ebf15038723f376a03d7b7c5a71dfb5c3ccf4a8e"
	Jan 11 09:09:35 embed-certs-630626 kubelet[793]: E0111 09:09:35.911741     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-x8s5p_kubernetes-dashboard(9e194a30-7def-4c03-bd28-f49617b490f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" podUID="9e194a30-7def-4c03-bd28-f49617b490f7"
	Jan 11 09:09:39 embed-certs-630626 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 09:09:39 embed-certs-630626 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 09:09:39 embed-certs-630626 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [aed85adc7d903d573ee408934699b77dc8ca903cc510c2b4cdc9390e57686b60] <==
	2026/01/11 09:08:55 Starting overwatch
	2026/01/11 09:08:55 Using namespace: kubernetes-dashboard
	2026/01/11 09:08:55 Using in-cluster config to connect to apiserver
	2026/01/11 09:08:55 Using secret token for csrf signing
	2026/01/11 09:08:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 09:08:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 09:08:55 Successful initial request to the apiserver, version: v1.35.0
	2026/01/11 09:08:55 Generating JWE encryption key
	2026/01/11 09:08:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 09:08:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 09:08:56 Initializing JWE encryption key from synchronized object
	2026/01/11 09:08:56 Creating in-cluster Sidecar client
	2026/01/11 09:08:56 Serving insecurely on HTTP port: 9090
	2026/01/11 09:08:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:09:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5433957fc000a476a42994d946d0e7a7cd56580b449b098078502bf7e619aca2] <==
	I0111 09:09:17.911160       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 09:09:17.924516       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 09:09:17.924663       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 09:09:17.927427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:21.383103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:25.643830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:29.242490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:32.296964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:35.319323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:35.326711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:09:35.326874       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 09:09:35.327116       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-630626_3264393d-914f-4f6c-81a8-aba39890042d!
	I0111 09:09:35.334213       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e8d4ca1-b478-4fe9-ac57-5e4f0fb583ee", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-630626_3264393d-914f-4f6c-81a8-aba39890042d became leader
	W0111 09:09:35.335050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:35.338180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:09:35.429005       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-630626_3264393d-914f-4f6c-81a8-aba39890042d!
	W0111 09:09:37.340949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:37.349027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:39.353123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:39.361219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:41.373621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:41.387685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7cc6dfe7ebe69d7fa2e4a83fcc9f97ca76f25f233e8dec6c17d486be7da04784] <==
	I0111 09:08:47.459123       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 09:09:17.466958       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-630626 -n embed-certs-630626
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-630626 -n embed-certs-630626: exit status 2 (473.619803ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-630626 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-630626
helpers_test.go:244: (dbg) docker inspect embed-certs-630626:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b",
	        "Created": "2026-01-11T09:07:25.16144692Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 788270,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:08:33.072356843Z",
	            "FinishedAt": "2026-01-11T09:08:32.125450667Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/hosts",
	        "LogPath": "/var/lib/docker/containers/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b/25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b-json.log",
	        "Name": "/embed-certs-630626",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-630626:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-630626",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "25c377e6342aae4d5305ebb1372ca8674d8605656dd915b3cffa99e3085dbc8b",
	                "LowerDir": "/var/lib/docker/overlay2/7fc45b1fcb57b15f0cc509ef006284c6ec8846193d1f6371d66840b980705ea4-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7fc45b1fcb57b15f0cc509ef006284c6ec8846193d1f6371d66840b980705ea4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7fc45b1fcb57b15f0cc509ef006284c6ec8846193d1f6371d66840b980705ea4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7fc45b1fcb57b15f0cc509ef006284c6ec8846193d1f6371d66840b980705ea4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-630626",
	                "Source": "/var/lib/docker/volumes/embed-certs-630626/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-630626",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-630626",
	                "name.minikube.sigs.k8s.io": "embed-certs-630626",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "add34e63c63cb8ebc5a0238f61532f67092e73b371cf42b92b249c76f14edda1",
	            "SandboxKey": "/var/run/docker/netns/add34e63c63c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-630626": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:0a:31:4a:94:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45ad769942edefa5685d287911d0a8d87021dd76ee2918e11cae91d80793b700",
	                    "EndpointID": "70eecdc579b32cc19edea9431ebe64865f36b13e14328594c9674730492a677a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-630626",
	                        "25c377e6342a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-630626 -n embed-certs-630626
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-630626 -n embed-certs-630626: exit status 2 (475.33185ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-630626 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-630626 logs -n 25: (1.931962773s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-931581                                                                                                                                                │ old-k8s-version-931581       │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:04 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:04 UTC │ 11 Jan 26 09:05 UTC │
	│ addons  │ enable metrics-server -p no-preload-236664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │                     │
	│ stop    │ -p no-preload-236664 --alsologtostderr -v=3                                                                                                                              │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:05 UTC │ 11 Jan 26 09:06 UTC │
	│ addons  │ enable dashboard -p no-preload-236664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ image   │ no-preload-236664 image list --format=json                                                                                                                               │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ pause   │ -p no-preload-236664 --alsologtostderr -v=1                                                                                                                              │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │                     │
	│ delete  │ -p no-preload-236664                                                                                                                                                     │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ delete  │ -p no-preload-236664                                                                                                                                                     │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:08 UTC │
	│ ssh     │ force-systemd-flag-630015 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p force-systemd-flag-630015                                                                                                                                             │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p disable-driver-mounts-781777                                                                                                                                          │ disable-driver-mounts-781777 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-630626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │                     │
	│ stop    │ -p embed-certs-630626 --alsologtostderr -v=3                                                                                                                             │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-630626 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:09 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-588333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-588333 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-588333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	│ image   │ embed-certs-630626 image list --format=json                                                                                                                              │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ pause   │ -p embed-certs-630626 --alsologtostderr -v=1                                                                                                                             │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:09:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:09:21.951745  791650 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:09:21.951878  791650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:09:21.951889  791650 out.go:374] Setting ErrFile to fd 2...
	I0111 09:09:21.951894  791650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:09:21.952254  791650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:09:21.952682  791650 out.go:368] Setting JSON to false
	I0111 09:09:21.953645  791650 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13912,"bootTime":1768108650,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:09:21.953744  791650 start.go:143] virtualization:  
	I0111 09:09:21.956790  791650 out.go:179] * [default-k8s-diff-port-588333] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:09:21.959002  791650 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:09:21.959155  791650 notify.go:221] Checking for updates...
	I0111 09:09:21.964646  791650 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:09:21.967454  791650 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:09:21.970309  791650 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:09:21.973286  791650 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:09:21.976298  791650 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:09:21.979749  791650 config.go:182] Loaded profile config "default-k8s-diff-port-588333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:09:21.980281  791650 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:09:22.011820  791650 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:09:22.011944  791650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:09:22.071997  791650 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:09:22.062443306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:09:22.072111  791650 docker.go:319] overlay module found
	I0111 09:09:22.075297  791650 out.go:179] * Using the docker driver based on existing profile
	I0111 09:09:22.078316  791650 start.go:309] selected driver: docker
	I0111 09:09:22.078337  791650 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-588333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-588333 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:09:22.078457  791650 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:09:22.079195  791650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:09:22.159084  791650 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:09:22.149225648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:09:22.159412  791650 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:09:22.159443  791650 cni.go:84] Creating CNI manager for ""
	I0111 09:09:22.159496  791650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:09:22.159539  791650 start.go:353] cluster config:
	{Name:default-k8s-diff-port-588333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-588333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:09:22.162760  791650 out.go:179] * Starting "default-k8s-diff-port-588333" primary control-plane node in "default-k8s-diff-port-588333" cluster
	I0111 09:09:22.165535  791650 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:09:22.168420  791650 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:09:22.171225  791650 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:09:22.171278  791650 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 09:09:22.171289  791650 cache.go:65] Caching tarball of preloaded images
	I0111 09:09:22.171344  791650 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:09:22.171393  791650 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:09:22.171404  791650 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 09:09:22.171509  791650 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/config.json ...
	I0111 09:09:22.192002  791650 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:09:22.192025  791650 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:09:22.192045  791650 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:09:22.192076  791650 start.go:360] acquireMachinesLock for default-k8s-diff-port-588333: {Name:mk6f824bc7ba249281d1a4e0d65911b4e29ac8d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:09:22.192145  791650 start.go:364] duration metric: took 46.015µs to acquireMachinesLock for "default-k8s-diff-port-588333"
	I0111 09:09:22.192170  791650 start.go:96] Skipping create...Using existing machine configuration
	I0111 09:09:22.192177  791650 fix.go:54] fixHost starting: 
	I0111 09:09:22.192436  791650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:09:22.208854  791650 fix.go:112] recreateIfNeeded on default-k8s-diff-port-588333: state=Stopped err=<nil>
	W0111 09:09:22.208887  791650 fix.go:138] unexpected machine state, will restart: <nil>
	W0111 09:09:19.098668  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	W0111 09:09:21.098821  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	W0111 09:09:23.100306  788146 pod_ready.go:104] pod "coredns-7d764666f9-x5tzj" is not "Ready", error: <nil>
	I0111 09:09:25.099137  788146 pod_ready.go:94] pod "coredns-7d764666f9-x5tzj" is "Ready"
	I0111 09:09:25.099167  788146 pod_ready.go:86] duration metric: took 37.005547516s for pod "coredns-7d764666f9-x5tzj" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.102120  788146 pod_ready.go:83] waiting for pod "etcd-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.107044  788146 pod_ready.go:94] pod "etcd-embed-certs-630626" is "Ready"
	I0111 09:09:25.107120  788146 pod_ready.go:86] duration metric: took 4.944473ms for pod "etcd-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.109706  788146 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.114445  788146 pod_ready.go:94] pod "kube-apiserver-embed-certs-630626" is "Ready"
	I0111 09:09:25.114482  788146 pod_ready.go:86] duration metric: took 4.744053ms for pod "kube-apiserver-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.117276  788146 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.296625  788146 pod_ready.go:94] pod "kube-controller-manager-embed-certs-630626" is "Ready"
	I0111 09:09:25.296656  788146 pod_ready.go:86] duration metric: took 179.355363ms for pod "kube-controller-manager-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.496713  788146 pod_ready.go:83] waiting for pod "kube-proxy-7xnsq" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:25.897659  788146 pod_ready.go:94] pod "kube-proxy-7xnsq" is "Ready"
	I0111 09:09:25.897692  788146 pod_ready.go:86] duration metric: took 400.947814ms for pod "kube-proxy-7xnsq" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:26.098598  788146 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:26.496535  788146 pod_ready.go:94] pod "kube-scheduler-embed-certs-630626" is "Ready"
	I0111 09:09:26.496563  788146 pod_ready.go:86] duration metric: took 397.935641ms for pod "kube-scheduler-embed-certs-630626" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:09:26.496576  788146 pod_ready.go:40] duration metric: took 38.407106802s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:09:26.557201  788146 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 09:09:26.560631  788146 out.go:203] 
	W0111 09:09:26.563666  788146 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 09:09:26.566706  788146 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:09:26.570101  788146 out.go:179] * Done! kubectl is now configured to use "embed-certs-630626" cluster and "default" namespace by default
	I0111 09:09:22.212080  791650 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-588333" ...
	I0111 09:09:22.212185  791650 cli_runner.go:164] Run: docker start default-k8s-diff-port-588333
	I0111 09:09:22.460255  791650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:09:22.481249  791650 kic.go:430] container "default-k8s-diff-port-588333" state is running.
	I0111 09:09:22.481729  791650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-588333
	I0111 09:09:22.502385  791650 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/config.json ...
	I0111 09:09:22.502632  791650 machine.go:94] provisionDockerMachine start ...
	I0111 09:09:22.503472  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:22.525737  791650 main.go:144] libmachine: Using SSH client type: native
	I0111 09:09:22.527387  791650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I0111 09:09:22.527411  791650 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 09:09:22.528173  791650 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0111 09:09:25.678067  791650 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-588333
	
	I0111 09:09:25.678099  791650 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-588333"
	I0111 09:09:25.678220  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:25.699103  791650 main.go:144] libmachine: Using SSH client type: native
	I0111 09:09:25.699468  791650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I0111 09:09:25.699489  791650 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-588333 && echo "default-k8s-diff-port-588333" | sudo tee /etc/hostname
	I0111 09:09:25.860947  791650 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-588333
	
	I0111 09:09:25.861066  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:25.879116  791650 main.go:144] libmachine: Using SSH client type: native
	I0111 09:09:25.879434  791650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I0111 09:09:25.879460  791650 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-588333' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-588333/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-588333' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 09:09:26.030801  791650 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 09:09:26.030870  791650 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 09:09:26.030944  791650 ubuntu.go:190] setting up certificates
	I0111 09:09:26.030974  791650 provision.go:84] configureAuth start
	I0111 09:09:26.031078  791650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-588333
	I0111 09:09:26.048890  791650 provision.go:143] copyHostCerts
	I0111 09:09:26.048962  791650 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 09:09:26.048971  791650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 09:09:26.049056  791650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 09:09:26.049162  791650 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 09:09:26.049167  791650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 09:09:26.049193  791650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 09:09:26.049305  791650 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 09:09:26.049310  791650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 09:09:26.049376  791650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 09:09:26.049421  791650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-588333 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-588333 localhost minikube]
	I0111 09:09:26.209259  791650 provision.go:177] copyRemoteCerts
	I0111 09:09:26.209342  791650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 09:09:26.209387  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:26.228735  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:26.333842  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 09:09:26.354071  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0111 09:09:26.371446  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 09:09:26.390188  791650 provision.go:87] duration metric: took 359.177252ms to configureAuth
	I0111 09:09:26.390263  791650 ubuntu.go:206] setting minikube options for container-runtime
	I0111 09:09:26.390478  791650 config.go:182] Loaded profile config "default-k8s-diff-port-588333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:09:26.390607  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:26.408361  791650 main.go:144] libmachine: Using SSH client type: native
	I0111 09:09:26.408697  791650 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I0111 09:09:26.408721  791650 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 09:09:26.862248  791650 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 09:09:26.862270  791650 machine.go:97] duration metric: took 4.3596277s to provisionDockerMachine
	I0111 09:09:26.862281  791650 start.go:293] postStartSetup for "default-k8s-diff-port-588333" (driver="docker")
	I0111 09:09:26.862293  791650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 09:09:26.862353  791650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 09:09:26.862402  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:26.892432  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:27.011743  791650 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 09:09:27.017130  791650 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 09:09:27.017157  791650 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 09:09:27.017168  791650 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 09:09:27.017223  791650 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 09:09:27.017297  791650 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 09:09:27.017400  791650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 09:09:27.026022  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:09:27.055462  791650 start.go:296] duration metric: took 193.164156ms for postStartSetup
	I0111 09:09:27.055815  791650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 09:09:27.055945  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:27.073761  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:27.179627  791650 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 09:09:27.184985  791650 fix.go:56] duration metric: took 4.992769243s for fixHost
	I0111 09:09:27.185013  791650 start.go:83] releasing machines lock for "default-k8s-diff-port-588333", held for 4.992855702s
	I0111 09:09:27.185126  791650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-588333
	I0111 09:09:27.203131  791650 ssh_runner.go:195] Run: cat /version.json
	I0111 09:09:27.203148  791650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 09:09:27.203185  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:27.203213  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:27.231157  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:27.238259  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:27.337615  791650 ssh_runner.go:195] Run: systemctl --version
	I0111 09:09:27.462448  791650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 09:09:27.501443  791650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 09:09:27.506285  791650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 09:09:27.506365  791650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 09:09:27.515787  791650 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0111 09:09:27.515815  791650 start.go:496] detecting cgroup driver to use...
	I0111 09:09:27.515847  791650 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 09:09:27.515898  791650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 09:09:27.531020  791650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 09:09:27.544385  791650 docker.go:218] disabling cri-docker service (if available) ...
	I0111 09:09:27.544454  791650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 09:09:27.559529  791650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 09:09:27.579711  791650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 09:09:27.710517  791650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 09:09:27.849694  791650 docker.go:234] disabling docker service ...
	I0111 09:09:27.849810  791650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 09:09:27.865031  791650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 09:09:27.878702  791650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 09:09:27.982547  791650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 09:09:28.117461  791650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 09:09:28.132135  791650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 09:09:28.148637  791650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 09:09:28.148795  791650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.159272  791650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 09:09:28.159344  791650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.168691  791650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.178617  791650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.188686  791650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 09:09:28.197392  791650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.206573  791650 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.215480  791650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:09:28.224890  791650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 09:09:28.232816  791650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 09:09:28.240994  791650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:09:28.354636  791650 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 09:09:28.528741  791650 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 09:09:28.528851  791650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 09:09:28.532701  791650 start.go:574] Will wait 60s for crictl version
	I0111 09:09:28.532787  791650 ssh_runner.go:195] Run: which crictl
	I0111 09:09:28.536153  791650 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 09:09:28.559429  791650 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 09:09:28.559577  791650 ssh_runner.go:195] Run: crio --version
	I0111 09:09:28.587275  791650 ssh_runner.go:195] Run: crio --version
	I0111 09:09:28.619438  791650 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 09:09:28.622291  791650 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-588333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:09:28.638175  791650 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 09:09:28.641834  791650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:09:28.651336  791650 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-588333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-588333 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 09:09:28.651458  791650 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:09:28.651520  791650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:09:28.695891  791650 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:09:28.695914  791650 crio.go:433] Images already preloaded, skipping extraction
	I0111 09:09:28.695975  791650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:09:28.720523  791650 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:09:28.720550  791650 cache_images.go:86] Images are preloaded, skipping loading
	I0111 09:09:28.720558  791650 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.35.0 crio true true} ...
	I0111 09:09:28.720665  791650 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-588333 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-588333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 09:09:28.720754  791650 ssh_runner.go:195] Run: crio config
	I0111 09:09:28.791018  791650 cni.go:84] Creating CNI manager for ""
	I0111 09:09:28.791044  791650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:09:28.791065  791650 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 09:09:28.791092  791650 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-588333 NodeName:default-k8s-diff-port-588333 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 09:09:28.791221  791650 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-588333"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 09:09:28.791294  791650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 09:09:28.800349  791650 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 09:09:28.800418  791650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 09:09:28.807958  791650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0111 09:09:28.821029  791650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 09:09:28.833944  791650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I0111 09:09:28.846553  791650 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 09:09:28.849973  791650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:09:28.859352  791650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:09:28.980078  791650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:09:28.996864  791650 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333 for IP: 192.168.76.2
	I0111 09:09:28.996891  791650 certs.go:195] generating shared ca certs ...
	I0111 09:09:28.996908  791650 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:09:28.997137  791650 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 09:09:28.997208  791650 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 09:09:28.997223  791650 certs.go:257] generating profile certs ...
	I0111 09:09:28.997365  791650 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/client.key
	I0111 09:09:28.997467  791650 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/apiserver.key.04b53819
	I0111 09:09:28.997575  791650 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/proxy-client.key
	I0111 09:09:28.997736  791650 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 09:09:28.997786  791650 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 09:09:28.997815  791650 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 09:09:28.997855  791650 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 09:09:28.997898  791650 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 09:09:28.997945  791650 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 09:09:28.998019  791650 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:09:28.998822  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 09:09:29.017911  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 09:09:29.037882  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 09:09:29.055894  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 09:09:29.073010  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0111 09:09:29.098046  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0111 09:09:29.117492  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 09:09:29.135566  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 09:09:29.154676  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 09:09:29.181160  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 09:09:29.207960  791650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 09:09:29.237679  791650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 09:09:29.257268  791650 ssh_runner.go:195] Run: openssl version
	I0111 09:09:29.266261  791650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:09:29.275971  791650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 09:09:29.288738  791650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:09:29.292732  791650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:09:29.292851  791650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:09:29.337216  791650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 09:09:29.344632  791650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 09:09:29.353065  791650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 09:09:29.360538  791650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 09:09:29.364255  791650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 09:09:29.364346  791650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 09:09:29.405648  791650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 09:09:29.413104  791650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 09:09:29.421414  791650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 09:09:29.429132  791650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 09:09:29.433002  791650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 09:09:29.433107  791650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 09:09:29.474697  791650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 09:09:29.482684  791650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 09:09:29.486584  791650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0111 09:09:29.527897  791650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0111 09:09:29.569134  791650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0111 09:09:29.611091  791650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0111 09:09:29.670651  791650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0111 09:09:29.721150  791650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0111 09:09:29.784729  791650 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-588333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-588333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:09:29.784879  791650 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 09:09:29.784989  791650 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 09:09:29.867596  791650 cri.go:96] found id: "e7c36bd895a088cee8bf01ae0ac34e5e8eb26713282675fc6d4788401b926477"
	I0111 09:09:29.867668  791650 cri.go:96] found id: "076f1fdf555e06139c3c03315dc96b76587a0287090e0f05c5db8be14ea7a439"
	I0111 09:09:29.867696  791650 cri.go:96] found id: "2ae07275c0ab7a01e2063bff151242ed44a810e62093e55ae36786b9db6a2095"
	I0111 09:09:29.867740  791650 cri.go:96] found id: "6f627745c3daad695d0b29049d2cbdb0651dcdbf59d1dfadfe4715bf0735f857"
	I0111 09:09:29.867762  791650 cri.go:96] found id: ""
	I0111 09:09:29.867857  791650 ssh_runner.go:195] Run: sudo runc list -f json
	W0111 09:09:29.883104  791650 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:09:29Z" level=error msg="open /run/runc: no such file or directory"
	I0111 09:09:29.883254  791650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 09:09:29.903552  791650 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0111 09:09:29.903639  791650 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0111 09:09:29.903731  791650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0111 09:09:29.913213  791650 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0111 09:09:29.914212  791650 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-588333" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:09:29.914851  791650 kubeconfig.go:62] /home/jenkins/minikube-integration/22402-575040/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-588333" cluster setting kubeconfig missing "default-k8s-diff-port-588333" context setting]
	I0111 09:09:29.919146  791650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:09:29.925295  791650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0111 09:09:29.941053  791650 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0111 09:09:29.941093  791650 kubeadm.go:602] duration metric: took 37.434409ms to restartPrimaryControlPlane
	I0111 09:09:29.941104  791650 kubeadm.go:403] duration metric: took 156.384319ms to StartCluster
	I0111 09:09:29.941122  791650 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:09:29.941202  791650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:09:29.942777  791650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:09:29.943037  791650 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:09:29.943248  791650 config.go:182] Loaded profile config "default-k8s-diff-port-588333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:09:29.943294  791650 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:09:29.943363  791650 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-588333"
	I0111 09:09:29.943376  791650 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-588333"
	W0111 09:09:29.943382  791650 addons.go:248] addon storage-provisioner should already be in state true
	I0111 09:09:29.943404  791650 host.go:66] Checking if "default-k8s-diff-port-588333" exists ...
	I0111 09:09:29.943863  791650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:09:29.944324  791650 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-588333"
	I0111 09:09:29.944454  791650 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-588333"
	W0111 09:09:29.944481  791650 addons.go:248] addon dashboard should already be in state true
	I0111 09:09:29.944534  791650 host.go:66] Checking if "default-k8s-diff-port-588333" exists ...
	I0111 09:09:29.945037  791650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:09:29.944379  791650 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-588333"
	I0111 09:09:29.948385  791650 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-588333"
	I0111 09:09:29.948749  791650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:09:29.949379  791650 out.go:179] * Verifying Kubernetes components...
	I0111 09:09:29.954544  791650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:09:30.020357  791650 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0111 09:09:30.020478  791650 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:09:30.021837  791650 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-588333"
	W0111 09:09:30.021861  791650 addons.go:248] addon default-storageclass should already be in state true
	I0111 09:09:30.021892  791650 host.go:66] Checking if "default-k8s-diff-port-588333" exists ...
	I0111 09:09:30.024438  791650 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:09:30.024474  791650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:09:30.024498  791650 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:09:30.024529  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:30.027390  791650 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0111 09:09:30.030413  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0111 09:09:30.030458  791650 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0111 09:09:30.030547  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:30.078439  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:30.088728  791650 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:09:30.088753  791650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:09:30.088839  791650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:09:30.096867  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:30.127866  791650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:09:30.366833  791650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:09:30.371693  791650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:09:30.377144  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0111 09:09:30.377164  791650 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0111 09:09:30.396794  791650 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-588333" to be "Ready" ...
	I0111 09:09:30.410192  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0111 09:09:30.410230  791650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0111 09:09:30.424991  791650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:09:30.444209  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0111 09:09:30.444235  791650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0111 09:09:30.516541  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0111 09:09:30.516566  791650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0111 09:09:30.583600  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0111 09:09:30.583626  791650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0111 09:09:30.627947  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0111 09:09:30.627973  791650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0111 09:09:30.643433  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0111 09:09:30.643458  791650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0111 09:09:30.660768  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0111 09:09:30.660795  791650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0111 09:09:30.677115  791650 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:09:30.677141  791650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0111 09:09:30.692002  791650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0111 09:09:33.077837  791650 node_ready.go:49] node "default-k8s-diff-port-588333" is "Ready"
	I0111 09:09:33.077871  791650 node_ready.go:38] duration metric: took 2.68104583s for node "default-k8s-diff-port-588333" to be "Ready" ...
	I0111 09:09:33.077886  791650 api_server.go:52] waiting for apiserver process to appear ...
	I0111 09:09:33.077947  791650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 09:09:34.832124  791650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.460395496s)
	I0111 09:09:34.832221  791650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.407201442s)
	I0111 09:09:34.832319  791650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.140285683s)
	I0111 09:09:34.832343  791650 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.754381812s)
	I0111 09:09:34.832734  791650 api_server.go:72] duration metric: took 4.889669771s to wait for apiserver process to appear ...
	I0111 09:09:34.832743  791650 api_server.go:88] waiting for apiserver healthz status ...
	I0111 09:09:34.832758  791650 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0111 09:09:34.835618  791650 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-588333 addons enable metrics-server
	
	I0111 09:09:34.843614  791650 api_server.go:325] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0111 09:09:34.846164  791650 api_server.go:141] control plane version: v1.35.0
	I0111 09:09:34.846239  791650 api_server.go:131] duration metric: took 13.479303ms to wait for apiserver health ...
	I0111 09:09:34.846264  791650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 09:09:34.850010  791650 system_pods.go:59] 8 kube-system pods found
	I0111 09:09:34.850099  791650 system_pods.go:61] "coredns-7d764666f9-2lh6p" [54a6cea1-73a3-4ca6-bd7a-afbbac903c9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:09:34.850153  791650 system_pods.go:61] "etcd-default-k8s-diff-port-588333" [ac8ac94a-7e8c-4899-98e5-a36f9dcaa48c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:09:34.850183  791650 system_pods.go:61] "kindnet-8pg22" [d8bfcb3a-747f-4072-9916-be69d991bcea] Running
	I0111 09:09:34.850209  791650 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-588333" [caae5ef6-ad07-477b-904c-95d13dd2c926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:09:34.850244  791650 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-588333" [e8274e7b-a729-43ee-8e0a-c9f156d0bdca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:09:34.850270  791650 system_pods.go:61] "kube-proxy-g4x2l" [23972631-486c-42e5-a029-569447059d31] Running
	I0111 09:09:34.850293  791650 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-588333" [0d5718df-db89-41b2-9cb6-c52b1c63fa5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:09:34.850327  791650 system_pods.go:61] "storage-provisioner" [acdfb8c3-6907-4ce4-b95f-2369474a2ece] Running
	I0111 09:09:34.850352  791650 system_pods.go:74] duration metric: took 4.067473ms to wait for pod list to return data ...
	I0111 09:09:34.850375  791650 default_sa.go:34] waiting for default service account to be created ...
	I0111 09:09:34.851568  791650 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0111 09:09:34.853910  791650 default_sa.go:45] found service account: "default"
	I0111 09:09:34.853972  791650 default_sa.go:55] duration metric: took 3.564206ms for default service account to be created ...
	I0111 09:09:34.853997  791650 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 09:09:34.855142  791650 addons.go:530] duration metric: took 4.911849387s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0111 09:09:34.857256  791650 system_pods.go:86] 8 kube-system pods found
	I0111 09:09:34.857326  791650 system_pods.go:89] "coredns-7d764666f9-2lh6p" [54a6cea1-73a3-4ca6-bd7a-afbbac903c9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 09:09:34.857351  791650 system_pods.go:89] "etcd-default-k8s-diff-port-588333" [ac8ac94a-7e8c-4899-98e5-a36f9dcaa48c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 09:09:34.857394  791650 system_pods.go:89] "kindnet-8pg22" [d8bfcb3a-747f-4072-9916-be69d991bcea] Running
	I0111 09:09:34.857420  791650 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-588333" [caae5ef6-ad07-477b-904c-95d13dd2c926] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:09:34.857444  791650 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-588333" [e8274e7b-a729-43ee-8e0a-c9f156d0bdca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:09:34.857482  791650 system_pods.go:89] "kube-proxy-g4x2l" [23972631-486c-42e5-a029-569447059d31] Running
	I0111 09:09:34.857509  791650 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-588333" [0d5718df-db89-41b2-9cb6-c52b1c63fa5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0111 09:09:34.857530  791650 system_pods.go:89] "storage-provisioner" [acdfb8c3-6907-4ce4-b95f-2369474a2ece] Running
	I0111 09:09:34.857566  791650 system_pods.go:126] duration metric: took 3.549855ms to wait for k8s-apps to be running ...
	I0111 09:09:34.857592  791650 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 09:09:34.857677  791650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:09:34.873079  791650 system_svc.go:56] duration metric: took 15.478509ms WaitForService to wait for kubelet
	I0111 09:09:34.873158  791650 kubeadm.go:587] duration metric: took 4.930093264s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:09:34.873191  791650 node_conditions.go:102] verifying NodePressure condition ...
	I0111 09:09:34.876388  791650 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 09:09:34.876474  791650 node_conditions.go:123] node cpu capacity is 2
	I0111 09:09:34.876521  791650 node_conditions.go:105] duration metric: took 3.307628ms to run NodePressure ...
	I0111 09:09:34.876548  791650 start.go:242] waiting for startup goroutines ...
	I0111 09:09:34.876584  791650 start.go:247] waiting for cluster config update ...
	I0111 09:09:34.876614  791650 start.go:256] writing updated cluster config ...
	I0111 09:09:34.876949  791650 ssh_runner.go:195] Run: rm -f paused
	I0111 09:09:34.880550  791650 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:09:34.884862  791650 pod_ready.go:83] waiting for pod "coredns-7d764666f9-2lh6p" in "kube-system" namespace to be "Ready" or be gone ...
	W0111 09:09:36.890832  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:09:38.891576  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:09:41.393360  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Jan 11 09:09:17 embed-certs-630626 crio[663]: time="2026-01-11T09:09:17.895813225Z" level=info msg="Started container" PID=1699 containerID=5433957fc000a476a42994d946d0e7a7cd56580b449b098078502bf7e619aca2 description=kube-system/storage-provisioner/storage-provisioner id=74bf37a4-40c5-4da7-885f-ebf0f01f30e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=99c7c68acacbab5d1ac33330b8e951fff1b9ee53aa022c69d0eef1c1fdd249ad
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.573267953Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.573674063Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.579514417Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.579806663Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.59221351Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.592245978Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.598890256Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.598996456Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.599027669Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.603326678Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:09:27 embed-certs-630626 crio[663]: time="2026-01-11T09:09:27.603359286Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.664777693Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=da0c32f5-bae8-4da1-a8ab-5b8fd82f91a9 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.666082563Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6a09b385-d11f-446f-9098-af608222ea90 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.667161273Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p/dashboard-metrics-scraper" id=a2aa3b92-68a1-4540-8e22-5645a8ec56fe name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.667298275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.674232502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.675245463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.69964425Z" level=info msg="Created container c12b2180c9cbc5cb5860b6e1ebf15038723f376a03d7b7c5a71dfb5c3ccf4a8e: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p/dashboard-metrics-scraper" id=a2aa3b92-68a1-4540-8e22-5645a8ec56fe name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.701993236Z" level=info msg="Starting container: c12b2180c9cbc5cb5860b6e1ebf15038723f376a03d7b7c5a71dfb5c3ccf4a8e" id=79314b50-bcb0-4ad3-bde0-77e1cd355dc4 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.705141962Z" level=info msg="Started container" PID=1785 containerID=c12b2180c9cbc5cb5860b6e1ebf15038723f376a03d7b7c5a71dfb5c3ccf4a8e description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p/dashboard-metrics-scraper id=79314b50-bcb0-4ad3-bde0-77e1cd355dc4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=59c8d502bfe5c479afc6b06de52ac2a4b3088261d6a5bad1d877ccbe6e78b897
	Jan 11 09:09:35 embed-certs-630626 conmon[1783]: conmon c12b2180c9cbc5cb5860 <ninfo>: container 1785 exited with status 1
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.914342746Z" level=info msg="Removing container: d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3" id=122b12c3-cd0f-4e39-9786-6a6c42d14aef name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.923287132Z" level=info msg="Error loading conmon cgroup of container d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3: cgroup deleted" id=122b12c3-cd0f-4e39-9786-6a6c42d14aef name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:09:35 embed-certs-630626 crio[663]: time="2026-01-11T09:09:35.926337526Z" level=info msg="Removed container d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p/dashboard-metrics-scraper" id=122b12c3-cd0f-4e39-9786-6a6c42d14aef name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c12b2180c9cbc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago        Exited              dashboard-metrics-scraper   3                   59c8d502bfe5c       dashboard-metrics-scraper-867fb5f87b-x8s5p   kubernetes-dashboard
	5433957fc000a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   99c7c68acacba       storage-provisioner                          kube-system
	aed85adc7d903       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   49 seconds ago       Running             kubernetes-dashboard        0                   ff97040e673ff       kubernetes-dashboard-b84665fb8-wpbkc         kubernetes-dashboard
	a82b2a8a7fc65       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           58 seconds ago       Running             coredns                     1                   472c7eaeaf8db       coredns-7d764666f9-x5tzj                     kube-system
	fea5d632e5f45       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   cce5a649e60c6       busybox                                      default
	7cc6dfe7ebe69       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   99c7c68acacba       storage-provisioner                          kube-system
	444e2483c5a5d       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           58 seconds ago       Running             kube-proxy                  1                   be084ecc79684       kube-proxy-7xnsq                             kube-system
	e7c65de22a34f       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           58 seconds ago       Running             kindnet-cni                 1                   61f49c7d69c97       kindnet-w5nb5                                kube-system
	59166e3edc5b1       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   529ffd9f35d84       kube-scheduler-embed-certs-630626            kube-system
	d655f1b34c99b       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   cbdb13b39be3a       kube-apiserver-embed-certs-630626            kube-system
	6e1ee699631c6       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   44ff1c72fd931       etcd-embed-certs-630626                      kube-system
	50f8850ccb505       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   91a2d9255c418       kube-controller-manager-embed-certs-630626   kube-system
	
	
	==> coredns [a82b2a8a7fc65f783a5f00fca30865fd5660c27d20ba8985f978a9336000e0ea] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37777 - 90 "HINFO IN 6778001374049786220.61113177844581166. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.012620339s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-630626
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-630626
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=embed-certs-630626
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_07_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:07:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-630626
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:09:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:09:17 +0000   Sun, 11 Jan 2026 09:07:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:09:17 +0000   Sun, 11 Jan 2026 09:07:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:09:17 +0000   Sun, 11 Jan 2026 09:07:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 09:09:17 +0000   Sun, 11 Jan 2026 09:08:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-630626
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                c5657d65-a5db-44ef-92ca-1ef6faf268e8
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-7d764666f9-x5tzj                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-embed-certs-630626                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-w5nb5                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-embed-certs-630626             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-embed-certs-630626    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-7xnsq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-embed-certs-630626             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-x8s5p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-wpbkc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  115s  node-controller  Node embed-certs-630626 event: Registered Node embed-certs-630626 in Controller
	  Normal  RegisteredNode  56s   node-controller  Node embed-certs-630626 event: Registered Node embed-certs-630626 in Controller
	
	
	==> dmesg <==
	[Jan11 08:38] overlayfs: idmapped layers are currently not supported
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	[Jan11 09:06] overlayfs: idmapped layers are currently not supported
	[Jan11 09:07] overlayfs: idmapped layers are currently not supported
	[Jan11 09:08] overlayfs: idmapped layers are currently not supported
	[ +12.491892] overlayfs: idmapped layers are currently not supported
	[Jan11 09:09] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6e1ee699631c60b05b3bf5f637dc3dc66eaa29e2df72af24028e423f9e31416f] <==
	{"level":"info","ts":"2026-01-11T09:08:41.919556Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:08:41.976617Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:08:41.976534Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-11T09:08:41.976601Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-11T09:08:42.014900Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-11T09:08:42.015032Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T09:08:42.015121Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T09:08:42.093224Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T09:08:42.093372Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:08:42.093515Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:08:42.093530Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:08:42.093546Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T09:08:42.127688Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:08:42.127764Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:08:42.127789Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-11T09:08:42.127819Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:08:42.149859Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:08:42.159483Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T09:08:42.159561Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:08:42.165267Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:08:42.149711Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-630626 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:08:42.165983Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:08:42.216064Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:08:42.333337Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-11T09:08:42.334112Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:09:45 up  3:52,  0 user,  load average: 3.16, 2.15, 2.01
	Linux embed-certs-630626 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e7c65de22a34fdcd786dca28f03d4318acafc8cc56ddf2febf531b131750a055] <==
	I0111 09:08:47.367938       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:08:47.368163       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 09:08:47.368298       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:08:47.368310       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:08:47.368320       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:08:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:08:47.566655       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:08:47.566674       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:08:47.566681       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:08:47.566970       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0111 09:09:17.569224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0111 09:09:17.569228       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0111 09:09:17.569397       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0111 09:09:17.569448       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0111 09:09:18.966771       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 09:09:18.966893       1 metrics.go:72] Registering metrics
	I0111 09:09:18.966980       1 controller.go:711] "Syncing nftables rules"
	I0111 09:09:27.566265       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:09:27.566930       1 main.go:301] handling current node
	I0111 09:09:37.569132       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0111 09:09:37.569207       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d655f1b34c99b7061f83f1625edf83fdeafc1d3bd3a3df8027784d5a67499088] <==
	I0111 09:08:46.499582       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 09:08:46.499702       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:46.500316       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0111 09:08:46.503454       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:46.504567       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 09:08:46.506368       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 09:08:46.507004       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 09:08:46.525878       1 aggregator.go:187] initial CRD sync complete...
	I0111 09:08:46.525974       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 09:08:46.526005       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 09:08:46.526034       1 cache.go:39] Caches are synced for autoregister controller
	I0111 09:08:46.527298       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 09:08:46.536577       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0111 09:08:46.590960       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 09:08:46.675966       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 09:08:46.996763       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 09:08:47.533757       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 09:08:47.626349       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 09:08:47.681046       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:08:47.698635       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:08:47.894411       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.142.184"}
	I0111 09:08:47.946640       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.115.244"}
	I0111 09:08:49.804529       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 09:08:49.905926       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 09:08:50.005947       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [50f8850ccb505fa89954b440b9419765295b2320ecae2ea5cb7da62fd4a99f39] <==
	I0111 09:08:49.355600       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.355677       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.354458       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.354840       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.355040       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.355026       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.355095       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.356681       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.357719       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.358104       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.358185       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.358403       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.359071       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.359526       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-630626"
	I0111 09:08:49.359589       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0111 09:08:49.355090       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.363079       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.364159       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.364191       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.371018       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:08:49.383842       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.455205       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:49.455231       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 09:08:49.455237       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 09:08:49.472091       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [444e2483c5a5dafffda230325a3219f14c242a9d4a210093339135b8a262b2cc] <==
	I0111 09:08:47.777089       1 server_linux.go:53] "Using iptables proxy"
	I0111 09:08:47.995798       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:08:48.096694       1 shared_informer.go:377] "Caches are synced"
	I0111 09:08:48.096726       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0111 09:08:48.096798       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 09:08:48.117264       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:08:48.117388       1 server_linux.go:136] "Using iptables Proxier"
	I0111 09:08:48.121425       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 09:08:48.122034       1 server.go:529] "Version info" version="v1.35.0"
	I0111 09:08:48.122162       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:08:48.126558       1 config.go:106] "Starting endpoint slice config controller"
	I0111 09:08:48.126597       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 09:08:48.126764       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 09:08:48.126803       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 09:08:48.126878       1 config.go:200] "Starting service config controller"
	I0111 09:08:48.126896       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 09:08:48.126915       1 config.go:309] "Starting node config controller"
	I0111 09:08:48.126919       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 09:08:48.227294       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 09:08:48.227431       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 09:08:48.227479       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 09:08:48.227499       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [59166e3edc5b1c5b88038cb476fcc1bb937cc685c07c9cc1684740b373d960e6] <==
	I0111 09:08:43.963051       1 serving.go:386] Generated self-signed cert in-memory
	W0111 09:08:46.367997       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 09:08:46.374246       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 09:08:46.374282       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 09:08:46.374289       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 09:08:46.466883       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 09:08:46.466921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:08:46.474760       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 09:08:46.474873       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 09:08:46.474892       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:08:46.474910       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 09:08:46.576176       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 09:09:02 embed-certs-630626 kubelet[793]: I0111 09:09:02.822757     793 scope.go:122] "RemoveContainer" containerID="95603203503cb5d2056ca1af15b778734b13181a4fb8bd9184ba3b904b7dd8b5"
	Jan 11 09:09:02 embed-certs-630626 kubelet[793]: E0111 09:09:02.823001     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-x8s5p_kubernetes-dashboard(9e194a30-7def-4c03-bd28-f49617b490f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" podUID="9e194a30-7def-4c03-bd28-f49617b490f7"
	Jan 11 09:09:10 embed-certs-630626 kubelet[793]: E0111 09:09:10.477587     793 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:10 embed-certs-630626 kubelet[793]: I0111 09:09:10.477636     793 scope.go:122] "RemoveContainer" containerID="95603203503cb5d2056ca1af15b778734b13181a4fb8bd9184ba3b904b7dd8b5"
	Jan 11 09:09:10 embed-certs-630626 kubelet[793]: E0111 09:09:10.477819     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-x8s5p_kubernetes-dashboard(9e194a30-7def-4c03-bd28-f49617b490f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" podUID="9e194a30-7def-4c03-bd28-f49617b490f7"
	Jan 11 09:09:11 embed-certs-630626 kubelet[793]: E0111 09:09:11.664880     793 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:11 embed-certs-630626 kubelet[793]: I0111 09:09:11.664931     793 scope.go:122] "RemoveContainer" containerID="95603203503cb5d2056ca1af15b778734b13181a4fb8bd9184ba3b904b7dd8b5"
	Jan 11 09:09:11 embed-certs-630626 kubelet[793]: I0111 09:09:11.846402     793 scope.go:122] "RemoveContainer" containerID="95603203503cb5d2056ca1af15b778734b13181a4fb8bd9184ba3b904b7dd8b5"
	Jan 11 09:09:11 embed-certs-630626 kubelet[793]: E0111 09:09:11.846720     793 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:11 embed-certs-630626 kubelet[793]: I0111 09:09:11.846748     793 scope.go:122] "RemoveContainer" containerID="d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3"
	Jan 11 09:09:11 embed-certs-630626 kubelet[793]: E0111 09:09:11.846899     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-x8s5p_kubernetes-dashboard(9e194a30-7def-4c03-bd28-f49617b490f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" podUID="9e194a30-7def-4c03-bd28-f49617b490f7"
	Jan 11 09:09:17 embed-certs-630626 kubelet[793]: I0111 09:09:17.863733     793 scope.go:122] "RemoveContainer" containerID="7cc6dfe7ebe69d7fa2e4a83fcc9f97ca76f25f233e8dec6c17d486be7da04784"
	Jan 11 09:09:20 embed-certs-630626 kubelet[793]: E0111 09:09:20.478513     793 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:20 embed-certs-630626 kubelet[793]: I0111 09:09:20.478566     793 scope.go:122] "RemoveContainer" containerID="d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3"
	Jan 11 09:09:20 embed-certs-630626 kubelet[793]: E0111 09:09:20.478740     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-x8s5p_kubernetes-dashboard(9e194a30-7def-4c03-bd28-f49617b490f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" podUID="9e194a30-7def-4c03-bd28-f49617b490f7"
	Jan 11 09:09:24 embed-certs-630626 kubelet[793]: E0111 09:09:24.993113     793 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-x5tzj" containerName="coredns"
	Jan 11 09:09:35 embed-certs-630626 kubelet[793]: E0111 09:09:35.664238     793 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:35 embed-certs-630626 kubelet[793]: I0111 09:09:35.664288     793 scope.go:122] "RemoveContainer" containerID="d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3"
	Jan 11 09:09:35 embed-certs-630626 kubelet[793]: I0111 09:09:35.910510     793 scope.go:122] "RemoveContainer" containerID="d236bafbe26a33e42f275e09e361c53d546d69843f3e78cfc8ca93d6394cf0a3"
	Jan 11 09:09:35 embed-certs-630626 kubelet[793]: E0111 09:09:35.911128     793 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:35 embed-certs-630626 kubelet[793]: I0111 09:09:35.911459     793 scope.go:122] "RemoveContainer" containerID="c12b2180c9cbc5cb5860b6e1ebf15038723f376a03d7b7c5a71dfb5c3ccf4a8e"
	Jan 11 09:09:35 embed-certs-630626 kubelet[793]: E0111 09:09:35.911741     793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-x8s5p_kubernetes-dashboard(9e194a30-7def-4c03-bd28-f49617b490f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-x8s5p" podUID="9e194a30-7def-4c03-bd28-f49617b490f7"
	Jan 11 09:09:39 embed-certs-630626 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 09:09:39 embed-certs-630626 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 09:09:39 embed-certs-630626 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [aed85adc7d903d573ee408934699b77dc8ca903cc510c2b4cdc9390e57686b60] <==
	2026/01/11 09:08:55 Using namespace: kubernetes-dashboard
	2026/01/11 09:08:55 Using in-cluster config to connect to apiserver
	2026/01/11 09:08:55 Using secret token for csrf signing
	2026/01/11 09:08:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 09:08:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 09:08:55 Successful initial request to the apiserver, version: v1.35.0
	2026/01/11 09:08:55 Generating JWE encryption key
	2026/01/11 09:08:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 09:08:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 09:08:56 Initializing JWE encryption key from synchronized object
	2026/01/11 09:08:56 Creating in-cluster Sidecar client
	2026/01/11 09:08:56 Serving insecurely on HTTP port: 9090
	2026/01/11 09:08:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:09:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:08:55 Starting overwatch
	
	
	==> storage-provisioner [5433957fc000a476a42994d946d0e7a7cd56580b449b098078502bf7e619aca2] <==
	I0111 09:09:17.924516       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 09:09:17.924663       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 09:09:17.927427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:21.383103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:25.643830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:29.242490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:32.296964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:35.319323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:35.326711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:09:35.326874       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 09:09:35.327116       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-630626_3264393d-914f-4f6c-81a8-aba39890042d!
	I0111 09:09:35.334213       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5e8d4ca1-b478-4fe9-ac57-5e4f0fb583ee", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-630626_3264393d-914f-4f6c-81a8-aba39890042d became leader
	W0111 09:09:35.335050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:35.338180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:09:35.429005       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-630626_3264393d-914f-4f6c-81a8-aba39890042d!
	W0111 09:09:37.340949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:37.349027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:39.353123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:39.361219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:41.373621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:41.387685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:43.390937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:43.401824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:45.405831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:09:45.418072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7cc6dfe7ebe69d7fa2e4a83fcc9f97ca76f25f233e8dec6c17d486be7da04784] <==
	I0111 09:08:47.459123       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 09:09:17.466958       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-630626 -n embed-certs-630626
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-630626 -n embed-certs-630626: exit status 2 (518.346876ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-630626 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-193049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-193049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (358.05048ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:10:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-193049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-193049
helpers_test.go:244: (dbg) docker inspect newest-cni-193049:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73",
	        "Created": "2026-01-11T09:09:55.930458937Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 795641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:09:56.005453951Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/hostname",
	        "HostsPath": "/var/lib/docker/containers/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/hosts",
	        "LogPath": "/var/lib/docker/containers/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73-json.log",
	        "Name": "/newest-cni-193049",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-193049:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-193049",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73",
	                "LowerDir": "/var/lib/docker/overlay2/e93912e4611e8bd9933c9c39d66f74ab93f6e85e31f80e743e12a76395e57e82-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e93912e4611e8bd9933c9c39d66f74ab93f6e85e31f80e743e12a76395e57e82/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e93912e4611e8bd9933c9c39d66f74ab93f6e85e31f80e743e12a76395e57e82/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e93912e4611e8bd9933c9c39d66f74ab93f6e85e31f80e743e12a76395e57e82/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-193049",
	                "Source": "/var/lib/docker/volumes/newest-cni-193049/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-193049",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-193049",
	                "name.minikube.sigs.k8s.io": "newest-cni-193049",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b8fb86ad01421f7132e9447dc4d7d1c7a5a9e8be4253a04d69463ad7744a5c0",
	            "SandboxKey": "/var/run/docker/netns/7b8fb86ad014",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33827"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33825"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33826"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-193049": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:76:54:2a:bc:7d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "74db70392a94307fb92c8a30f920a21debbaee70569c0d4609fca3634546fe0e",
	                    "EndpointID": "7211409f349c632d108474f1c828e8801c9885525a092673d99e02f6e2bef67c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-193049",
	                        "40fddecbe5bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-193049 -n newest-cni-193049
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-193049 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-193049 logs -n 25: (1.579225168s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-236664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ start   │ -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:06 UTC │ 11 Jan 26 09:06 UTC │
	│ image   │ no-preload-236664 image list --format=json                                                                                                                                                                                                    │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ pause   │ -p no-preload-236664 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │                     │
	│ delete  │ -p no-preload-236664                                                                                                                                                                                                                          │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ delete  │ -p no-preload-236664                                                                                                                                                                                                                          │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:08 UTC │
	│ ssh     │ force-systemd-flag-630015 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p force-systemd-flag-630015                                                                                                                                                                                                                  │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p disable-driver-mounts-781777                                                                                                                                                                                                               │ disable-driver-mounts-781777 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-630626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │                     │
	│ stop    │ -p embed-certs-630626 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-630626 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:09 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-588333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-588333 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-588333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:10 UTC │
	│ image   │ embed-certs-630626 image list --format=json                                                                                                                                                                                                   │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ pause   │ -p embed-certs-630626 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	│ delete  │ -p embed-certs-630626                                                                                                                                                                                                                         │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ delete  │ -p embed-certs-630626                                                                                                                                                                                                                         │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ start   │ -p newest-cni-193049 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-193049            │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:10 UTC │
	│ addons  │ enable metrics-server -p newest-cni-193049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-193049            │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:09:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:09:50.385936  795222 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:09:50.386413  795222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:09:50.386421  795222 out.go:374] Setting ErrFile to fd 2...
	I0111 09:09:50.386427  795222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:09:50.387027  795222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:09:50.387648  795222 out.go:368] Setting JSON to false
	I0111 09:09:50.391812  795222 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13940,"bootTime":1768108650,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:09:50.392566  795222 start.go:143] virtualization:  
	I0111 09:09:50.395980  795222 out.go:179] * [newest-cni-193049] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:09:50.400151  795222 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:09:50.400495  795222 notify.go:221] Checking for updates...
	I0111 09:09:50.406640  795222 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:09:50.410558  795222 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:09:50.414364  795222 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:09:50.418529  795222 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:09:50.422744  795222 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:09:50.427403  795222 config.go:182] Loaded profile config "default-k8s-diff-port-588333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:09:50.427519  795222 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:09:50.486944  795222 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:09:50.487156  795222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:09:50.592261  795222 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 09:09:50.580785318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:09:50.592369  795222 docker.go:319] overlay module found
	I0111 09:09:50.596178  795222 out.go:179] * Using the docker driver based on user configuration
	I0111 09:09:50.599095  795222 start.go:309] selected driver: docker
	I0111 09:09:50.599117  795222 start.go:928] validating driver "docker" against <nil>
	I0111 09:09:50.599131  795222 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:09:50.599843  795222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:09:50.700670  795222 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 09:09:50.691856398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:09:50.700813  795222 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0111 09:09:50.700836  795222 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0111 09:09:50.701052  795222 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 09:09:50.704049  795222 out.go:179] * Using Docker driver with root privileges
	I0111 09:09:50.706852  795222 cni.go:84] Creating CNI manager for ""
	I0111 09:09:50.706912  795222 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:09:50.706936  795222 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 09:09:50.707020  795222 start.go:353] cluster config:
	{Name:newest-cni-193049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-193049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:09:50.710085  795222 out.go:179] * Starting "newest-cni-193049" primary control-plane node in "newest-cni-193049" cluster
	I0111 09:09:50.712878  795222 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:09:50.715792  795222 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:09:50.718616  795222 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:09:50.718665  795222 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 09:09:50.718675  795222 cache.go:65] Caching tarball of preloaded images
	I0111 09:09:50.718780  795222 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:09:50.718791  795222 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 09:09:50.718942  795222 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:09:50.719200  795222 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/config.json ...
	I0111 09:09:50.719233  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/config.json: {Name:mk299c7cbb34a339c1735751e4dbb1bf3f8d929c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
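
The cluster config dumped above is persisted as JSON under the profile directory (the "Saving config to .../config.json" line). As a rough illustration only, this Go sketch writes a hand-picked subset of those fields to a hypothetical profile path; the struct below is not minikube's real config type.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// clusterConfig is an illustrative subset of the fields seen in the log,
// not minikube's actual ClusterConfig type.
type clusterConfig struct {
	Name              string `json:"Name"`
	Driver            string `json:"Driver"`
	Memory            int    `json:"Memory"`
	CPUs              int    `json:"CPUs"`
	KubernetesVersion string `json:"KubernetesVersion"`
	ContainerRuntime  string `json:"ContainerRuntime"`
}

// saveConfig marshals the config and writes it as config.json in profileDir.
func saveConfig(profileDir string, cfg clusterConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	if err := os.MkdirAll(profileDir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(profileDir, "config.json"), data, 0o644)
}

func main() {
	// /tmp path is a placeholder, not the real .minikube profile directory.
	err := saveConfig("/tmp/profiles/newest-cni-193049", clusterConfig{
		Name: "newest-cni-193049", Driver: "docker", Memory: 3072, CPUs: 2,
		KubernetesVersion: "v1.35.0", ContainerRuntime: "crio",
	})
	fmt.Println(err)
}
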
	I0111 09:09:50.766929  795222 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:09:50.766952  795222 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:09:50.766972  795222 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:09:50.767009  795222 start.go:360] acquireMachinesLock for newest-cni-193049: {Name:mkf4b4913de610081a1f70a8057cb410a71fc0bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:09:50.767127  795222 start.go:364] duration metric: took 97.749µs to acquireMachinesLock for "newest-cni-193049"
	I0111 09:09:50.767158  795222 start.go:93] Provisioning new machine with config: &{Name:newest-cni-193049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-193049 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:09:50.767233  795222 start.go:125] createHost starting for "" (driver="docker")
	W0111 09:09:48.391636  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:09:50.891454  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	I0111 09:09:50.770736  795222 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 09:09:50.770974  795222 start.go:159] libmachine.API.Create for "newest-cni-193049" (driver="docker")
	I0111 09:09:50.771008  795222 client.go:173] LocalClient.Create starting
	I0111 09:09:50.771083  795222 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem
	I0111 09:09:50.771127  795222 main.go:144] libmachine: Decoding PEM data...
	I0111 09:09:50.771142  795222 main.go:144] libmachine: Parsing certificate...
	I0111 09:09:50.771197  795222 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem
	I0111 09:09:50.771220  795222 main.go:144] libmachine: Decoding PEM data...
	I0111 09:09:50.771232  795222 main.go:144] libmachine: Parsing certificate...
	I0111 09:09:50.771608  795222 cli_runner.go:164] Run: docker network inspect newest-cni-193049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 09:09:50.791729  795222 cli_runner.go:211] docker network inspect newest-cni-193049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 09:09:50.791819  795222 network_create.go:284] running [docker network inspect newest-cni-193049] to gather additional debugging logs...
	I0111 09:09:50.791844  795222 cli_runner.go:164] Run: docker network inspect newest-cni-193049
	W0111 09:09:50.808262  795222 cli_runner.go:211] docker network inspect newest-cni-193049 returned with exit code 1
	I0111 09:09:50.808297  795222 network_create.go:287] error running [docker network inspect newest-cni-193049]: docker network inspect newest-cni-193049: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-193049 not found
	I0111 09:09:50.808310  795222 network_create.go:289] output of [docker network inspect newest-cni-193049]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-193049 not found
	
	** /stderr **
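
The failed `docker network inspect` above is how the start path discovers that the cluster network does not exist yet. A minimal, hypothetical sketch of that check, assuming only that a non-zero exit whose stderr mentions "not found" means the network is absent:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// networkExists runs `docker network inspect <name>` and interprets the
// "network ... not found" error as "the network does not exist yet".
func networkExists(name string) (bool, error) {
	cmd := exec.Command("docker", "network", "inspect", name)
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		if strings.Contains(stderr.String(), "not found") {
			return false, nil
		}
		return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, stderr.String())
	}
	return true, nil
}

func main() {
	ok, err := networkExists("newest-cni-193049")
	fmt.Println(ok, err)
}
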
	I0111 09:09:50.808423  795222 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:09:50.824234  795222 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-113e3e286bbe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:2e:86:95:08:19} reservation:<nil>}
	I0111 09:09:50.824565  795222 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461c1a9d970d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:7e:25:fe:d0:0d} reservation:<nil>}
	I0111 09:09:50.824898  795222 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a38e10816f85 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:42:af:ae:32:ae} reservation:<nil>}
	I0111 09:09:50.825179  795222 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fa19db219143 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:5f:6b:c8:86:a5} reservation:<nil>}
	I0111 09:09:50.825574  795222 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197b140}
	I0111 09:09:50.825601  795222 network_create.go:124] attempt to create docker network newest-cni-193049 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 09:09:50.825663  795222 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-193049 newest-cni-193049
	I0111 09:09:50.892444  795222 network_create.go:108] docker network newest-cni-193049 192.168.85.0/24 created
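
The subnet scan above walks candidate 192.168.x.0/24 networks, skips the ones already claimed by existing bridges, and settles on the first free one. A toy sketch of that selection; the starting octet and the step of 9 are assumptions read off the values in this log, not documented behaviour:

package main

import "fmt"

// firstFreeSubnet returns the first 192.168.x.0/24 candidate that is not
// already taken, mirroring the "skipping subnet ... taken" / "using free
// private subnet" lines above.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24
}
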
	I0111 09:09:50.892475  795222 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-193049" container
	I0111 09:09:50.892561  795222 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 09:09:50.911847  795222 cli_runner.go:164] Run: docker volume create newest-cni-193049 --label name.minikube.sigs.k8s.io=newest-cni-193049 --label created_by.minikube.sigs.k8s.io=true
	I0111 09:09:50.931715  795222 oci.go:103] Successfully created a docker volume newest-cni-193049
	I0111 09:09:50.931797  795222 cli_runner.go:164] Run: docker run --rm --name newest-cni-193049-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-193049 --entrypoint /usr/bin/test -v newest-cni-193049:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 09:09:51.767625  795222 oci.go:107] Successfully prepared a docker volume newest-cni-193049
	I0111 09:09:51.767692  795222 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:09:51.767702  795222 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 09:09:51.767783  795222 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-193049:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	W0111 09:09:53.391517  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:09:55.891786  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	I0111 09:09:55.855837  795222 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-193049:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.087990293s)
	I0111 09:09:55.855885  795222 kic.go:203] duration metric: took 4.088179465s to extract preloaded images to volume ...
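
The ~4s step above unpacks the preloaded image tarball straight into the named Docker volume by running a throwaway container whose entrypoint is tar. A hedged sketch of that same docker invocation from Go; the tarball path and image tag passed in main are placeholders, not the real cache paths:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload runs a one-shot container with /usr/bin/tar as entrypoint,
// mounting the lz4 tarball read-only and the target volume at /extractDir.
func extractPreload(tarball, volume, image string) error {
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder arguments for illustration only.
	fmt.Println(extractPreload(
		"/tmp/preloaded-images.tar.lz4",
		"newest-cni-193049",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402"))
}
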
	W0111 09:09:55.856030  795222 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 09:09:55.856142  795222 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 09:09:55.915816  795222 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-193049 --name newest-cni-193049 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-193049 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-193049 --network newest-cni-193049 --ip 192.168.85.2 --volume newest-cni-193049:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 09:09:56.242985  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Running}}
	I0111 09:09:56.271620  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:09:56.291601  795222 cli_runner.go:164] Run: docker exec newest-cni-193049 stat /var/lib/dpkg/alternatives/iptables
	I0111 09:09:56.344044  795222 oci.go:144] the created container "newest-cni-193049" has a running status.
	I0111 09:09:56.344071  795222 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa...
	I0111 09:09:56.573630  795222 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 09:09:56.610677  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:09:56.635843  795222 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 09:09:56.635863  795222 kic_runner.go:114] Args: [docker exec --privileged newest-cni-193049 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 09:09:56.687624  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:09:56.713420  795222 machine.go:94] provisionDockerMachine start ...
	I0111 09:09:56.713510  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:09:56.743200  795222 main.go:144] libmachine: Using SSH client type: native
	I0111 09:09:56.743539  795222 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33823 <nil> <nil>}
	I0111 09:09:56.743549  795222 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 09:09:56.744194  795222 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38382->127.0.0.1:33823: read: connection reset by peer
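
The handshake failure here is transient: the next lines show the same SSH command succeeding about three seconds later, which implies a retry loop around the dial while the container's sshd comes up. A minimal, hypothetical retry wrapper (the attempt count and delay are made up, not minikube's values):

package main

import (
	"errors"
	"fmt"
	"time"
)

// withRetry calls fn until it succeeds or attempts are exhausted,
// sleeping a fixed delay between tries.
func withRetry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	calls := 0
	err := withRetry(5, 10*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("read: connection reset by peer")
		}
		return nil
	})
	fmt.Println(calls, err) // 3 <nil>
}
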
	I0111 09:09:59.894198  795222 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-193049
	
	I0111 09:09:59.894227  795222 ubuntu.go:182] provisioning hostname "newest-cni-193049"
	I0111 09:09:59.894302  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:09:59.912243  795222 main.go:144] libmachine: Using SSH client type: native
	I0111 09:09:59.912566  795222 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33823 <nil> <nil>}
	I0111 09:09:59.912587  795222 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-193049 && echo "newest-cni-193049" | sudo tee /etc/hostname
	I0111 09:10:00.173158  795222 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-193049
	
	I0111 09:10:00.173259  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:00.309149  795222 main.go:144] libmachine: Using SSH client type: native
	I0111 09:10:00.309504  795222 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33823 <nil> <nil>}
	I0111 09:10:00.309522  795222 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-193049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-193049/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-193049' | sudo tee -a /etc/hosts; 
				fi
			fi
	W0111 09:09:58.390485  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:10:00.436328  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	I0111 09:10:00.611809  795222 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 09:10:00.611843  795222 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 09:10:00.611879  795222 ubuntu.go:190] setting up certificates
	I0111 09:10:00.611935  795222 provision.go:84] configureAuth start
	I0111 09:10:00.612039  795222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-193049
	I0111 09:10:00.644171  795222 provision.go:143] copyHostCerts
	I0111 09:10:00.644291  795222 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 09:10:00.644317  795222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 09:10:00.644444  795222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 09:10:00.645577  795222 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 09:10:00.645603  795222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 09:10:00.645674  795222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 09:10:00.645774  795222 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 09:10:00.645788  795222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 09:10:00.645819  795222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 09:10:00.645888  795222 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.newest-cni-193049 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-193049]
	I0111 09:10:00.799206  795222 provision.go:177] copyRemoteCerts
	I0111 09:10:00.799276  795222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 09:10:00.799323  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:00.818604  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:00.926554  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 09:10:00.945700  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0111 09:10:00.968132  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 09:10:00.988340  795222 provision.go:87] duration metric: took 376.37212ms to configureAuth
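
configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.85.2, localhost, minikube and the node name. The sketch below produces a certificate with the same SANs using only the Go standard library; it signs with a throwaway self-signed CA generated on the spot (the real step signs with the existing ca.pem/ca-key.pem), and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, used here only so the example is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-193049"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-193049"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
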
	I0111 09:10:00.988369  795222 ubuntu.go:206] setting minikube options for container-runtime
	I0111 09:10:00.988595  795222 config.go:182] Loaded profile config "newest-cni-193049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:10:00.988710  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:01.006699  795222 main.go:144] libmachine: Using SSH client type: native
	I0111 09:10:01.007056  795222 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33823 <nil> <nil>}
	I0111 09:10:01.007088  795222 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 09:10:01.405646  795222 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 09:10:01.405672  795222 machine.go:97] duration metric: took 4.692231642s to provisionDockerMachine
	I0111 09:10:01.405683  795222 client.go:176] duration metric: took 10.634664793s to LocalClient.Create
	I0111 09:10:01.405697  795222 start.go:167] duration metric: took 10.634725807s to libmachine.API.Create "newest-cni-193049"
	I0111 09:10:01.405704  795222 start.go:293] postStartSetup for "newest-cni-193049" (driver="docker")
	I0111 09:10:01.405715  795222 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 09:10:01.405796  795222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 09:10:01.405840  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:01.426905  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:01.534248  795222 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 09:10:01.537806  795222 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 09:10:01.537885  795222 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 09:10:01.537911  795222 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 09:10:01.537990  795222 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 09:10:01.538082  795222 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 09:10:01.538222  795222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 09:10:01.545945  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:10:01.564737  795222 start.go:296] duration metric: took 159.01769ms for postStartSetup
	I0111 09:10:01.565192  795222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-193049
	I0111 09:10:01.582671  795222 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/config.json ...
	I0111 09:10:01.582975  795222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 09:10:01.583019  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:01.600343  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:01.707651  795222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 09:10:01.712912  795222 start.go:128] duration metric: took 10.945663035s to createHost
	I0111 09:10:01.712942  795222 start.go:83] releasing machines lock for "newest-cni-193049", held for 10.945803025s
	I0111 09:10:01.713014  795222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-193049
	I0111 09:10:01.729844  795222 ssh_runner.go:195] Run: cat /version.json
	I0111 09:10:01.729913  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:01.730245  795222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 09:10:01.730306  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:01.753698  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:01.767662  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:01.866237  795222 ssh_runner.go:195] Run: systemctl --version
	I0111 09:10:01.977020  795222 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 09:10:02.023568  795222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 09:10:02.028714  795222 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 09:10:02.028800  795222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 09:10:02.060947  795222 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 09:10:02.060970  795222 start.go:496] detecting cgroup driver to use...
	I0111 09:10:02.061004  795222 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 09:10:02.061069  795222 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 09:10:02.080802  795222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 09:10:02.094626  795222 docker.go:218] disabling cri-docker service (if available) ...
	I0111 09:10:02.094779  795222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 09:10:02.114330  795222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 09:10:02.134714  795222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 09:10:02.269714  795222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 09:10:02.400465  795222 docker.go:234] disabling docker service ...
	I0111 09:10:02.400558  795222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 09:10:02.423679  795222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 09:10:02.437461  795222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 09:10:02.567821  795222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 09:10:02.693686  795222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 09:10:02.707900  795222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 09:10:02.722033  795222 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 09:10:02.722116  795222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.732266  795222 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 09:10:02.732355  795222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.741676  795222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.751422  795222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.760766  795222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 09:10:02.769423  795222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.778601  795222 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.793171  795222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.802761  795222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 09:10:02.811220  795222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 09:10:02.819361  795222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:10:02.935914  795222 ssh_runner.go:195] Run: sudo systemctl restart crio
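
The series of sed commands above adjusts the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before restarting crio. A rough Go equivalent of the first two substitutions, shown only to make the string surgery explicit; it edits an in-memory string rather than the real file:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rewriteCrioConf replaces the pause_image and cgroup_manager lines with the
// values the log sets, leaving everything else untouched.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	// Illustrative input, not the actual 02-crio.conf contents.
	in := strings.Join([]string{
		`[crio.image]`,
		`pause_image = "registry.k8s.io/pause:3.9"`,
		`[crio.runtime]`,
		`cgroup_manager = "systemd"`,
	}, "\n")
	fmt.Println(rewriteCrioConf(in))
}
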
	I0111 09:10:03.115713  795222 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 09:10:03.115822  795222 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 09:10:03.119903  795222 start.go:574] Will wait 60s for crictl version
	I0111 09:10:03.120047  795222 ssh_runner.go:195] Run: which crictl
	I0111 09:10:03.123662  795222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 09:10:03.151386  795222 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 09:10:03.151546  795222 ssh_runner.go:195] Run: crio --version
	I0111 09:10:03.184015  795222 ssh_runner.go:195] Run: crio --version
	I0111 09:10:03.216870  795222 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 09:10:03.219692  795222 cli_runner.go:164] Run: docker network inspect newest-cni-193049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:10:03.237170  795222 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 09:10:03.241562  795222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:10:03.255686  795222 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0111 09:10:03.258549  795222 kubeadm.go:884] updating cluster {Name:newest-cni-193049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-193049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 09:10:03.258720  795222 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:10:03.258808  795222 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:10:03.307172  795222 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:10:03.307199  795222 crio.go:433] Images already preloaded, skipping extraction
	I0111 09:10:03.307262  795222 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:10:03.333822  795222 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:10:03.333850  795222 cache_images.go:86] Images are preloaded, skipping loading
	I0111 09:10:03.333859  795222 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0111 09:10:03.333942  795222 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-193049 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-193049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 09:10:03.334031  795222 ssh_runner.go:195] Run: crio config
	I0111 09:10:03.406378  795222 cni.go:84] Creating CNI manager for ""
	I0111 09:10:03.406406  795222 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:10:03.406427  795222 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0111 09:10:03.406453  795222 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-193049 NodeName:newest-cni-193049 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 09:10:03.406603  795222 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-193049"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
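
The kubeadm config above is assembled from the cluster options (pod CIDR, service CIDR, DNS domain, admission plugins, and so on). As a small illustration of that kind of assembly, a hypothetical text/template rendering of just the networking stanza, using the same values the log prints; this is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// networkingTmpl mirrors the networking stanza of the YAML above.
const networkingTmpl = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("networking").Parse(networkingTmpl))
	_ = t.Execute(os.Stdout, struct {
		DNSDomain, PodSubnet, ServiceSubnet string
	}{"cluster.local", "10.42.0.0/16", "10.96.0.0/12"})
}
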
	
	I0111 09:10:03.406683  795222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 09:10:03.414920  795222 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 09:10:03.415008  795222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 09:10:03.423080  795222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0111 09:10:03.440348  795222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 09:10:03.457904  795222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0111 09:10:03.474671  795222 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 09:10:03.478617  795222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:10:03.488841  795222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:10:03.605985  795222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:10:03.624569  795222 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049 for IP: 192.168.85.2
	I0111 09:10:03.624593  795222 certs.go:195] generating shared ca certs ...
	I0111 09:10:03.624609  795222 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:03.624751  795222 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 09:10:03.624800  795222 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 09:10:03.624810  795222 certs.go:257] generating profile certs ...
	I0111 09:10:03.624863  795222 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/client.key
	I0111 09:10:03.624905  795222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/client.crt with IP's: []
	I0111 09:10:03.719493  795222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/client.crt ...
	I0111 09:10:03.719527  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/client.crt: {Name:mk337c4d1ac253622d62a845d0c98d56efc55a63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:03.719738  795222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/client.key ...
	I0111 09:10:03.719753  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/client.key: {Name:mka494a3b746d2c5b74df371fe6fcf9db4133d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:03.719855  795222 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.key.452904eb
	I0111 09:10:03.719874  795222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.crt.452904eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0111 09:10:03.832895  795222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.crt.452904eb ...
	I0111 09:10:03.832925  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.crt.452904eb: {Name:mkeea2df728596d775e3b25db2cc5a9d45ceec4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:03.833109  795222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.key.452904eb ...
	I0111 09:10:03.833123  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.key.452904eb: {Name:mk2e52d7929d51b03bb2a19c571839aff9b24ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:03.833222  795222 certs.go:382] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.crt.452904eb -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.crt
	I0111 09:10:03.833303  795222 certs.go:386] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.key.452904eb -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.key
	I0111 09:10:03.833368  795222 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.key
	I0111 09:10:03.833386  795222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.crt with IP's: []
	I0111 09:10:04.193953  795222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.crt ...
	I0111 09:10:04.193985  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.crt: {Name:mkb856bd4da9b67fe469d2e739f585ce3b0d4637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:04.194191  795222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.key ...
	I0111 09:10:04.194208  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.key: {Name:mkcea439c07934e1b9dd6c99b55d0b52c8d7c9c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:04.194404  795222 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 09:10:04.194452  795222 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 09:10:04.194467  795222 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 09:10:04.194497  795222 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 09:10:04.194526  795222 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 09:10:04.194554  795222 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 09:10:04.194611  795222 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:10:04.195183  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 09:10:04.216174  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 09:10:04.237551  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 09:10:04.257045  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 09:10:04.277618  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0111 09:10:04.300447  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0111 09:10:04.320876  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 09:10:04.339781  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 09:10:04.362890  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 09:10:04.382787  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 09:10:04.405970  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 09:10:04.427126  795222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 09:10:04.446103  795222 ssh_runner.go:195] Run: openssl version
	I0111 09:10:04.456173  795222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 09:10:04.466714  795222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 09:10:04.478985  795222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 09:10:04.486907  795222 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 09:10:04.487018  795222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 09:10:04.541304  795222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 09:10:04.559931  795222 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5769072.pem /etc/ssl/certs/3ec20f2e.0
	I0111 09:10:04.578304  795222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:10:04.601399  795222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 09:10:04.615558  795222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:10:04.620491  795222 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:10:04.620611  795222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:10:04.664387  795222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 09:10:04.673456  795222 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 09:10:04.682025  795222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 09:10:04.690526  795222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 09:10:04.699366  795222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 09:10:04.703656  795222 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 09:10:04.703777  795222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 09:10:04.746990  795222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 09:10:04.755151  795222 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/576907.pem /etc/ssl/certs/51391683.0
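
Each certificate installed under /usr/share/ca-certificates is hashed with `openssl x509 -hash` and then symlinked as /etc/ssl/certs/<hash>.0, which is what the ln -fs commands above do. A hedged Go sketch of one hash-and-link step (it removes any existing link first to emulate ln -fs); paths in main are the ones from the log but the function itself is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash asks openssl for the subject hash of certPath and creates
// certsDir/<hash>.0 pointing at it.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
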
	I0111 09:10:04.763804  795222 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 09:10:04.767857  795222 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 09:10:04.767955  795222 kubeadm.go:401] StartCluster: {Name:newest-cni-193049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-193049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:10:04.768046  795222 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 09:10:04.768114  795222 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 09:10:04.797268  795222 cri.go:96] found id: ""
	I0111 09:10:04.797390  795222 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 09:10:04.805780  795222 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 09:10:04.815451  795222 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 09:10:04.815570  795222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 09:10:04.825100  795222 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 09:10:04.825174  795222 kubeadm.go:158] found existing configuration files:
	
	I0111 09:10:04.825258  795222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 09:10:04.833830  795222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 09:10:04.833898  795222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 09:10:04.841870  795222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 09:10:04.849815  795222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 09:10:04.849929  795222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 09:10:04.857501  795222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 09:10:04.865690  795222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 09:10:04.865809  795222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 09:10:04.873598  795222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 09:10:04.882058  795222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 09:10:04.882230  795222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 09:10:04.891888  795222 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 09:10:04.930905  795222 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 09:10:04.931370  795222 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 09:10:05.022567  795222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 09:10:05.022651  795222 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 09:10:05.022693  795222 kubeadm.go:319] OS: Linux
	I0111 09:10:05.022745  795222 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 09:10:05.022797  795222 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 09:10:05.022846  795222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 09:10:05.022899  795222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 09:10:05.022952  795222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 09:10:05.023003  795222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 09:10:05.023053  795222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 09:10:05.023101  795222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 09:10:05.023147  795222 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 09:10:05.103311  795222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 09:10:05.103525  795222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 09:10:05.103661  795222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 09:10:05.111201  795222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 09:10:05.116856  795222 out.go:252]   - Generating certificates and keys ...
	I0111 09:10:05.117033  795222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 09:10:05.117139  795222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 09:10:05.200359  795222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W0111 09:10:02.891825  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:10:04.892529  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:10:06.892952  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	I0111 09:10:05.599265  795222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 09:10:05.909397  795222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 09:10:06.493467  795222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 09:10:06.596775  795222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 09:10:06.597174  795222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-193049] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 09:10:06.940726  795222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 09:10:06.941357  795222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-193049] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 09:10:07.346544  795222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 09:10:07.408308  795222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 09:10:07.605470  795222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 09:10:07.605786  795222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 09:10:07.735957  795222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 09:10:08.316430  795222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 09:10:08.623663  795222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 09:10:08.790346  795222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 09:10:09.147694  795222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 09:10:09.148470  795222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 09:10:09.151289  795222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 09:10:09.154962  795222 out.go:252]   - Booting up control plane ...
	I0111 09:10:09.155167  795222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 09:10:09.155311  795222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 09:10:09.155396  795222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 09:10:09.179130  795222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 09:10:09.179450  795222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 09:10:09.188601  795222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 09:10:09.188887  795222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 09:10:09.189083  795222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 09:10:09.332344  795222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 09:10:09.332465  795222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W0111 09:10:09.409495  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:10:11.899838  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	I0111 09:10:12.390550  791650 pod_ready.go:94] pod "coredns-7d764666f9-2lh6p" is "Ready"
	I0111 09:10:12.390575  791650 pod_ready.go:86] duration metric: took 37.505644311s for pod "coredns-7d764666f9-2lh6p" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.394080  791650 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.400687  791650 pod_ready.go:94] pod "etcd-default-k8s-diff-port-588333" is "Ready"
	I0111 09:10:12.400757  791650 pod_ready.go:86] duration metric: took 6.585397ms for pod "etcd-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.408037  791650 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.412678  791650 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-588333" is "Ready"
	I0111 09:10:12.412750  791650 pod_ready.go:86] duration metric: took 4.622475ms for pod "kube-apiserver-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.416338  791650 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.589548  791650 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-588333" is "Ready"
	I0111 09:10:12.589623  791650 pod_ready.go:86] duration metric: took 173.225634ms for pod "kube-controller-manager-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.789080  791650 pod_ready.go:83] waiting for pod "kube-proxy-g4x2l" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:13.188851  791650 pod_ready.go:94] pod "kube-proxy-g4x2l" is "Ready"
	I0111 09:10:13.188937  791650 pod_ready.go:86] duration metric: took 399.77626ms for pod "kube-proxy-g4x2l" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:13.389585  791650 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:13.789192  791650 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-588333" is "Ready"
	I0111 09:10:13.789223  791650 pod_ready.go:86] duration metric: took 399.610728ms for pod "kube-scheduler-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:13.789246  791650 pod_ready.go:40] duration metric: took 38.908633874s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:10:13.870342  791650 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 09:10:13.873316  791650 out.go:203] 
	W0111 09:10:13.876148  791650 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 09:10:13.879057  791650 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:10:13.882042  791650 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-588333" cluster and "default" namespace by default
	I0111 09:10:10.834872  795222 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50188301s
	I0111 09:10:10.834984  795222 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0111 09:10:10.835071  795222 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0111 09:10:10.835165  795222 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0111 09:10:10.835247  795222 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0111 09:10:12.845189  795222 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.010467502s
	I0111 09:10:14.468107  795222 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.633559739s
	I0111 09:10:16.335817  795222 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501278649s
	I0111 09:10:16.370551  795222 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 09:10:16.388634  795222 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 09:10:16.407903  795222 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 09:10:16.408104  795222 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-193049 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 09:10:16.422754  795222 kubeadm.go:319] [bootstrap-token] Using token: zs68fl.2gyixjjdurk170u7
	I0111 09:10:16.424976  795222 out.go:252]   - Configuring RBAC rules ...
	I0111 09:10:16.425106  795222 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 09:10:16.432198  795222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 09:10:16.441299  795222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 09:10:16.445968  795222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 09:10:16.453571  795222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 09:10:16.459213  795222 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 09:10:16.744165  795222 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 09:10:17.208563  795222 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 09:10:17.743160  795222 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 09:10:17.744717  795222 kubeadm.go:319] 
	I0111 09:10:17.744813  795222 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 09:10:17.744829  795222 kubeadm.go:319] 
	I0111 09:10:17.744924  795222 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 09:10:17.744937  795222 kubeadm.go:319] 
	I0111 09:10:17.744970  795222 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 09:10:17.745041  795222 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 09:10:17.745107  795222 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 09:10:17.745117  795222 kubeadm.go:319] 
	I0111 09:10:17.745188  795222 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 09:10:17.745199  795222 kubeadm.go:319] 
	I0111 09:10:17.745283  795222 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 09:10:17.745292  795222 kubeadm.go:319] 
	I0111 09:10:17.745379  795222 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 09:10:17.745506  795222 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 09:10:17.745610  795222 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 09:10:17.745648  795222 kubeadm.go:319] 
	I0111 09:10:17.745765  795222 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 09:10:17.745899  795222 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 09:10:17.745912  795222 kubeadm.go:319] 
	I0111 09:10:17.746156  795222 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zs68fl.2gyixjjdurk170u7 \
	I0111 09:10:17.746280  795222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb \
	I0111 09:10:17.746302  795222 kubeadm.go:319] 	--control-plane 
	I0111 09:10:17.746306  795222 kubeadm.go:319] 
	I0111 09:10:17.746409  795222 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 09:10:17.746413  795222 kubeadm.go:319] 
	I0111 09:10:17.746508  795222 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zs68fl.2gyixjjdurk170u7 \
	I0111 09:10:17.746644  795222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb 
	I0111 09:10:17.750699  795222 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:10:17.751118  795222 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 09:10:17.751234  795222 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 09:10:17.751257  795222 cni.go:84] Creating CNI manager for ""
	I0111 09:10:17.751264  795222 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:10:17.756229  795222 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 09:10:17.759172  795222 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 09:10:17.763280  795222 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0111 09:10:17.763302  795222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 09:10:17.777152  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0111 09:10:18.079462  795222 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 09:10:18.079583  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:18.079603  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-193049 minikube.k8s.io/updated_at=2026_01_11T09_10_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=newest-cni-193049 minikube.k8s.io/primary=true
	I0111 09:10:18.240335  795222 ops.go:34] apiserver oom_adj: -16
	I0111 09:10:18.240428  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:18.741186  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:19.240958  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:19.741335  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:20.241479  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:20.740942  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:21.241263  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:21.740619  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:22.240839  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:22.397596  795222 kubeadm.go:1114] duration metric: took 4.318072401s to wait for elevateKubeSystemPrivileges
	I0111 09:10:22.397625  795222 kubeadm.go:403] duration metric: took 17.629673831s to StartCluster
	I0111 09:10:22.397642  795222 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:22.397703  795222 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:10:22.398632  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:22.398841  795222 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:10:22.398922  795222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 09:10:22.399158  795222 config.go:182] Loaded profile config "newest-cni-193049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:10:22.399195  795222 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:10:22.399250  795222 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-193049"
	I0111 09:10:22.399264  795222 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-193049"
	I0111 09:10:22.399285  795222 host.go:66] Checking if "newest-cni-193049" exists ...
	I0111 09:10:22.399786  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:10:22.400741  795222 addons.go:70] Setting default-storageclass=true in profile "newest-cni-193049"
	I0111 09:10:22.400765  795222 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-193049"
	I0111 09:10:22.401071  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:10:22.404322  795222 out.go:179] * Verifying Kubernetes components...
	I0111 09:10:22.408039  795222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:10:22.435055  795222 addons.go:239] Setting addon default-storageclass=true in "newest-cni-193049"
	I0111 09:10:22.435097  795222 host.go:66] Checking if "newest-cni-193049" exists ...
	I0111 09:10:22.435520  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:10:22.456262  795222 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:10:22.462307  795222 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:10:22.462334  795222 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:10:22.462402  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:22.480793  795222 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:10:22.480815  795222 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:10:22.480878  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:22.506248  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:22.527400  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:22.777664  795222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0111 09:10:22.777829  795222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:10:22.847488  795222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:10:22.853408  795222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:10:23.194302  795222 api_server.go:52] waiting for apiserver process to appear ...
	I0111 09:10:23.194410  795222 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0111 09:10:23.196073  795222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 09:10:23.599261  795222 api_server.go:72] duration metric: took 1.200394024s to wait for apiserver process to appear ...
	I0111 09:10:23.599341  795222 api_server.go:88] waiting for apiserver healthz status ...
	I0111 09:10:23.599372  795222 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 09:10:23.614982  795222 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0111 09:10:23.617043  795222 api_server.go:141] control plane version: v1.35.0
	I0111 09:10:23.617066  795222 api_server.go:131] duration metric: took 17.704465ms to wait for apiserver health ...
	I0111 09:10:23.617075  795222 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 09:10:23.625419  795222 system_pods.go:59] 8 kube-system pods found
	I0111 09:10:23.625507  795222 system_pods.go:61] "coredns-7d764666f9-4qsbm" [8662ede8-99ed-41d7-a141-89503f63b4e0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0111 09:10:23.625531  795222 system_pods.go:61] "etcd-newest-cni-193049" [a4912791-1140-4aa0-945b-575738a94e8f] Running
	I0111 09:10:23.625578  795222 system_pods.go:61] "kindnet-nnd7m" [5dc3259e-2cc0-400d-b23f-8e9c3620cf32] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0111 09:10:23.625611  795222 system_pods.go:61] "kube-apiserver-newest-cni-193049" [46ff78e7-d56a-4b2c-8f53-9ee776ca8da3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:10:23.625659  795222 system_pods.go:61] "kube-controller-manager-newest-cni-193049" [48de95e0-e1e4-4ae4-93b3-5ddd0bab2034] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:10:23.625687  795222 system_pods.go:61] "kube-proxy-nvrgg" [e7eff21d-1b08-4787-ae22-091ae53fe50c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 09:10:23.625707  795222 system_pods.go:61] "kube-scheduler-newest-cni-193049" [6e9e362a-8cdc-49d5-95e7-984ebf01ce4b] Running
	I0111 09:10:23.625744  795222 system_pods.go:61] "storage-provisioner" [de2a52c5-86cc-4d8a-a725-505c47a2e932] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0111 09:10:23.625770  795222 system_pods.go:74] duration metric: took 8.687538ms to wait for pod list to return data ...
	I0111 09:10:23.625793  795222 default_sa.go:34] waiting for default service account to be created ...
	I0111 09:10:23.626910  795222 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0111 09:10:23.630022  795222 addons.go:530] duration metric: took 1.230823846s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0111 09:10:23.634071  795222 default_sa.go:45] found service account: "default"
	I0111 09:10:23.634171  795222 default_sa.go:55] duration metric: took 8.3435ms for default service account to be created ...
	I0111 09:10:23.634201  795222 kubeadm.go:587] duration metric: took 1.235335601s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 09:10:23.634247  795222 node_conditions.go:102] verifying NodePressure condition ...
	I0111 09:10:23.639980  795222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 09:10:23.640007  795222 node_conditions.go:123] node cpu capacity is 2
	I0111 09:10:23.640019  795222 node_conditions.go:105] duration metric: took 5.750252ms to run NodePressure ...
	I0111 09:10:23.640032  795222 start.go:242] waiting for startup goroutines ...
	I0111 09:10:23.698670  795222 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-193049" context rescaled to 1 replicas
	I0111 09:10:23.698751  795222 start.go:247] waiting for cluster config update ...
	I0111 09:10:23.698778  795222 start.go:256] writing updated cluster config ...
	I0111 09:10:23.699098  795222 ssh_runner.go:195] Run: rm -f paused
	I0111 09:10:23.803435  795222 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 09:10:23.806299  795222 out.go:203] 
	W0111 09:10:23.809541  795222 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 09:10:23.812741  795222 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:10:23.815869  795222 out.go:179] * Done! kubectl is now configured to use "newest-cni-193049" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 09:10:11 newest-cni-193049 crio[833]: time="2026-01-11T09:10:11.258538361Z" level=info msg="Created container 88579f435a5590691f66bad2f52b1669be0aa08eba7006a609f63be759842b16: kube-system/kube-apiserver-newest-cni-193049/kube-apiserver" id=f7cdc663-769b-481a-a96d-298bf8009e65 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:11 newest-cni-193049 crio[833]: time="2026-01-11T09:10:11.260018512Z" level=info msg="Starting container: 88579f435a5590691f66bad2f52b1669be0aa08eba7006a609f63be759842b16" id=175136c5-9b2e-4c10-b112-f6fd2945ffd0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:10:11 newest-cni-193049 crio[833]: time="2026-01-11T09:10:11.263063466Z" level=info msg="Started container" PID=1243 containerID=88579f435a5590691f66bad2f52b1669be0aa08eba7006a609f63be759842b16 description=kube-system/kube-apiserver-newest-cni-193049/kube-apiserver id=175136c5-9b2e-4c10-b112-f6fd2945ffd0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b795338958f9904ce4a3723825ab09128ff1ae37d80ac21e02792314abed2a6a
	Jan 11 09:10:22 newest-cni-193049 crio[833]: time="2026-01-11T09:10:22.684089311Z" level=info msg="Running pod sandbox: kube-system/kindnet-nnd7m/POD" id=89fa8aa9-772d-4872-b914-b69bdf8b0bee name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:22 newest-cni-193049 crio[833]: time="2026-01-11T09:10:22.684204496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:22 newest-cni-193049 crio[833]: time="2026-01-11T09:10:22.694462277Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=89fa8aa9-772d-4872-b914-b69bdf8b0bee name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:22 newest-cni-193049 crio[833]: time="2026-01-11T09:10:22.700559078Z" level=info msg="Ran pod sandbox 20a47b6097b59d844e860dd00646f19b7ac3627ab4c95c336d0cb47664f01ba6 with infra container: kube-system/kindnet-nnd7m/POD" id=89fa8aa9-772d-4872-b914-b69bdf8b0bee name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:22 newest-cni-193049 crio[833]: time="2026-01-11T09:10:22.702347687Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=a6a798fc-94b8-4ca2-891c-006340102aaa name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:22 newest-cni-193049 crio[833]: time="2026-01-11T09:10:22.702632154Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=a6a798fc-94b8-4ca2-891c-006340102aaa name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:22 newest-cni-193049 crio[833]: time="2026-01-11T09:10:22.702883218Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=a6a798fc-94b8-4ca2-891c-006340102aaa name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:22 newest-cni-193049 crio[833]: time="2026-01-11T09:10:22.706636719Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=74bac2a4-8340-47cb-9781-faeb6a9004df name=/runtime.v1.ImageService/PullImage
	Jan 11 09:10:22 newest-cni-193049 crio[833]: time="2026-01-11T09:10:22.710302169Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.247237433Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-nvrgg/POD" id=f6c2eb1d-3d96-4e85-9810-ac31ccf36c91 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.247775368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.265717022Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f6c2eb1d-3d96-4e85-9810-ac31ccf36c91 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.293116126Z" level=info msg="Ran pod sandbox 82c0e0bfc366598ea4afef36235fc5f09222a744199c65dcd46bdfebd5050897 with infra container: kube-system/kube-proxy-nvrgg/POD" id=f6c2eb1d-3d96-4e85-9810-ac31ccf36c91 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.297984447Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=88b8d2c7-234c-4d81-852e-790c761e11e0 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.301094822Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=c2931af6-151a-4e06-98e1-b87eca55b244 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.307329273Z" level=info msg="Creating container: kube-system/kube-proxy-nvrgg/kube-proxy" id=58403acf-4323-44c8-9459-665b7732a49f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.308114481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.333821453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.339543012Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.424946119Z" level=info msg="Created container 7dd2ea54de05d01b04c5209268634495eb3efbcd6ed4e366708e8da5d8766d2d: kube-system/kube-proxy-nvrgg/kube-proxy" id=58403acf-4323-44c8-9459-665b7732a49f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.425971535Z" level=info msg="Starting container: 7dd2ea54de05d01b04c5209268634495eb3efbcd6ed4e366708e8da5d8766d2d" id=eb40854c-42fc-42ac-afa4-54107c96f077 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:10:23 newest-cni-193049 crio[833]: time="2026-01-11T09:10:23.438556123Z" level=info msg="Started container" PID=1493 containerID=7dd2ea54de05d01b04c5209268634495eb3efbcd6ed4e366708e8da5d8766d2d description=kube-system/kube-proxy-nvrgg/kube-proxy id=eb40854c-42fc-42ac-afa4-54107c96f077 name=/runtime.v1.RuntimeService/StartContainer sandboxID=82c0e0bfc366598ea4afef36235fc5f09222a744199c65dcd46bdfebd5050897
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7dd2ea54de05d       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   2 seconds ago       Running             kube-proxy                0                   82c0e0bfc3665       kube-proxy-nvrgg                            kube-system
	88579f435a559       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   14 seconds ago      Running             kube-apiserver            0                   b795338958f99       kube-apiserver-newest-cni-193049            kube-system
	ff994631130d5       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   14 seconds ago      Running             kube-scheduler            0                   a096771430aa3       kube-scheduler-newest-cni-193049            kube-system
	6bada2fc50015       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   14 seconds ago      Running             etcd                      0                   d0ad5f5be42d5       etcd-newest-cni-193049                      kube-system
	97585779d2afe       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   14 seconds ago      Running             kube-controller-manager   0                   92fbb54b5cabd       kube-controller-manager-newest-cni-193049   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-193049
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-193049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=newest-cni-193049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_10_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:10:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-193049
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:10:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:10:17 +0000   Sun, 11 Jan 2026 09:10:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:10:17 +0000   Sun, 11 Jan 2026 09:10:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:10:17 +0000   Sun, 11 Jan 2026 09:10:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 11 Jan 2026 09:10:17 +0000   Sun, 11 Jan 2026 09:10:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-193049
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                fd89335c-cfbd-4c1f-a796-6c2f717b69b5
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-193049                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8s
	  kube-system                 kindnet-nnd7m                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-193049             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-193049    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-nvrgg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-193049             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-193049 event: Registered Node newest-cni-193049 in Controller
	
	
	==> dmesg <==
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	[Jan11 09:06] overlayfs: idmapped layers are currently not supported
	[Jan11 09:07] overlayfs: idmapped layers are currently not supported
	[Jan11 09:08] overlayfs: idmapped layers are currently not supported
	[ +12.491892] overlayfs: idmapped layers are currently not supported
	[Jan11 09:09] overlayfs: idmapped layers are currently not supported
	[Jan11 09:10] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6bada2fc5001553a8110ef674fe7a14ffbc8cd4114893d993da57fb8d48c3c46] <==
	{"level":"info","ts":"2026-01-11T09:10:11.349094Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T09:10:12.012350Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-11T09:10:12.012519Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-11T09:10:12.012608Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2026-01-11T09:10:12.012681Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:10:12.012744Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:10:12.013870Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:10:12.013956Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:10:12.014016Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2026-01-11T09:10:12.014055Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:10:12.015578Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:10:12.018785Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-193049 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:10:12.019026Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:10:12.019179Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:10:12.019268Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:10:12.019305Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-11T09:10:12.019631Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:10:12.019759Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:10:12.019820Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-11T09:10:12.019888Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-11T09:10:12.026731Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-11T09:10:12.027664Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:10:12.034524Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-11T09:10:12.046765Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:10:12.047572Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:10:26 up  3:52,  0 user,  load average: 3.74, 2.41, 2.10
	Linux newest-cni-193049 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [88579f435a5590691f66bad2f52b1669be0aa08eba7006a609f63be759842b16] <==
	I0111 09:10:14.482358       1 policy_source.go:248] refreshing policies
	E0111 09:10:14.492412       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0111 09:10:14.511217       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:10:14.517366       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	E0111 09:10:14.536947       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0111 09:10:14.540830       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 09:10:14.576654       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:10:14.698656       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:10:15.146439       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0111 09:10:15.152438       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0111 09:10:15.152529       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 09:10:15.977205       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:10:16.049038       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:10:16.153900       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0111 09:10:16.161316       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0111 09:10:16.162490       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 09:10:16.167682       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:10:16.405751       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 09:10:17.181234       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 09:10:17.207445       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0111 09:10:17.220933       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0111 09:10:21.957287       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:10:21.966023       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:10:22.152183       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 09:10:22.253625       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [97585779d2afe7af0d48d6c318cc0ae806814d0d768b71fa4111efde9e21ccd7] <==
	I0111 09:10:21.232950       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.234642       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.235159       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.235945       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.236825       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-193049" podCIDRs=["10.42.0.0/24"]
	I0111 09:10:21.243326       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.243360       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.243380       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.243407       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.243429       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.243515       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0111 09:10:21.243589       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-193049"
	I0111 09:10:21.243631       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.243658       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.243669       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.243688       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.243706       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.243779       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.243862       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.249513       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0111 09:10:21.250895       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.251188       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:21.251216       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 09:10:21.251223       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 09:10:21.313111       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [7dd2ea54de05d01b04c5209268634495eb3efbcd6ed4e366708e8da5d8766d2d] <==
	I0111 09:10:23.515598       1 server_linux.go:53] "Using iptables proxy"
	I0111 09:10:23.647755       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:10:23.748221       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:23.748257       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0111 09:10:23.748324       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 09:10:23.817915       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:10:23.817971       1 server_linux.go:136] "Using iptables Proxier"
	I0111 09:10:23.823050       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 09:10:23.823348       1 server.go:529] "Version info" version="v1.35.0"
	I0111 09:10:23.823361       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:10:23.824676       1 config.go:200] "Starting service config controller"
	I0111 09:10:23.824686       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 09:10:23.824702       1 config.go:106] "Starting endpoint slice config controller"
	I0111 09:10:23.824706       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 09:10:23.824725       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 09:10:23.824729       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 09:10:23.825341       1 config.go:309] "Starting node config controller"
	I0111 09:10:23.825348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 09:10:23.825354       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 09:10:23.926246       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 09:10:23.926281       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 09:10:23.926308       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ff994631130d55e4d1d78278009580ad662302fdb458b04a03fdb7ef6d17e00d] <==
	E0111 09:10:14.498257       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 09:10:14.498442       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 09:10:14.498602       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0111 09:10:14.498736       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0111 09:10:14.498852       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 09:10:14.499040       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0111 09:10:14.499129       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0111 09:10:14.499235       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 09:10:14.499307       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 09:10:14.499390       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0111 09:10:14.499478       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0111 09:10:14.499589       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0111 09:10:14.499733       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0111 09:10:14.499813       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 09:10:14.499857       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0111 09:10:15.323687       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0111 09:10:15.325425       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0111 09:10:15.354076       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0111 09:10:15.412841       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0111 09:10:15.423130       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0111 09:10:15.524688       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0111 09:10:15.528970       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0111 09:10:15.592628       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0111 09:10:15.647973       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I0111 09:10:17.260926       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 09:10:18 newest-cni-193049 kubelet[1294]: I0111 09:10:18.441341    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-193049" podStartSLOduration=1.441324249 podStartE2EDuration="1.441324249s" podCreationTimestamp="2026-01-11 09:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:10:18.422006544 +0000 UTC m=+1.415163831" watchObservedRunningTime="2026-01-11 09:10:18.441324249 +0000 UTC m=+1.434481535"
	Jan 11 09:10:18 newest-cni-193049 kubelet[1294]: I0111 09:10:18.461974    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-193049" podStartSLOduration=1.461956858 podStartE2EDuration="1.461956858s" podCreationTimestamp="2026-01-11 09:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:10:18.461157693 +0000 UTC m=+1.454314980" watchObservedRunningTime="2026-01-11 09:10:18.461956858 +0000 UTC m=+1.455114145"
	Jan 11 09:10:18 newest-cni-193049 kubelet[1294]: I0111 09:10:18.462113    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-193049" podStartSLOduration=1.462107818 podStartE2EDuration="1.462107818s" podCreationTimestamp="2026-01-11 09:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-11 09:10:18.442109178 +0000 UTC m=+1.435266464" watchObservedRunningTime="2026-01-11 09:10:18.462107818 +0000 UTC m=+1.455265105"
	Jan 11 09:10:19 newest-cni-193049 kubelet[1294]: E0111 09:10:19.256952    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-193049" containerName="kube-apiserver"
	Jan 11 09:10:19 newest-cni-193049 kubelet[1294]: E0111 09:10:19.257864    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-193049" containerName="kube-scheduler"
	Jan 11 09:10:19 newest-cni-193049 kubelet[1294]: E0111 09:10:19.259109    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-193049" containerName="etcd"
	Jan 11 09:10:20 newest-cni-193049 kubelet[1294]: E0111 09:10:20.260340    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-193049" containerName="kube-apiserver"
	Jan 11 09:10:20 newest-cni-193049 kubelet[1294]: E0111 09:10:20.261438    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-193049" containerName="kube-scheduler"
	Jan 11 09:10:21 newest-cni-193049 kubelet[1294]: I0111 09:10:21.240059    1294 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 11 09:10:21 newest-cni-193049 kubelet[1294]: I0111 09:10:21.240781    1294 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 11 09:10:21 newest-cni-193049 kubelet[1294]: E0111 09:10:21.431505    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-193049" containerName="kube-controller-manager"
	Jan 11 09:10:22 newest-cni-193049 kubelet[1294]: I0111 09:10:22.366375    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7eff21d-1b08-4787-ae22-091ae53fe50c-kube-proxy\") pod \"kube-proxy-nvrgg\" (UID: \"e7eff21d-1b08-4787-ae22-091ae53fe50c\") " pod="kube-system/kube-proxy-nvrgg"
	Jan 11 09:10:22 newest-cni-193049 kubelet[1294]: I0111 09:10:22.366430    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldp4w\" (UniqueName: \"kubernetes.io/projected/e7eff21d-1b08-4787-ae22-091ae53fe50c-kube-api-access-ldp4w\") pod \"kube-proxy-nvrgg\" (UID: \"e7eff21d-1b08-4787-ae22-091ae53fe50c\") " pod="kube-system/kube-proxy-nvrgg"
	Jan 11 09:10:22 newest-cni-193049 kubelet[1294]: I0111 09:10:22.366497    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7eff21d-1b08-4787-ae22-091ae53fe50c-xtables-lock\") pod \"kube-proxy-nvrgg\" (UID: \"e7eff21d-1b08-4787-ae22-091ae53fe50c\") " pod="kube-system/kube-proxy-nvrgg"
	Jan 11 09:10:22 newest-cni-193049 kubelet[1294]: I0111 09:10:22.366517    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7eff21d-1b08-4787-ae22-091ae53fe50c-lib-modules\") pod \"kube-proxy-nvrgg\" (UID: \"e7eff21d-1b08-4787-ae22-091ae53fe50c\") " pod="kube-system/kube-proxy-nvrgg"
	Jan 11 09:10:22 newest-cni-193049 kubelet[1294]: I0111 09:10:22.467900    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dc3259e-2cc0-400d-b23f-8e9c3620cf32-xtables-lock\") pod \"kindnet-nnd7m\" (UID: \"5dc3259e-2cc0-400d-b23f-8e9c3620cf32\") " pod="kube-system/kindnet-nnd7m"
	Jan 11 09:10:22 newest-cni-193049 kubelet[1294]: I0111 09:10:22.467978    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5dc3259e-2cc0-400d-b23f-8e9c3620cf32-lib-modules\") pod \"kindnet-nnd7m\" (UID: \"5dc3259e-2cc0-400d-b23f-8e9c3620cf32\") " pod="kube-system/kindnet-nnd7m"
	Jan 11 09:10:22 newest-cni-193049 kubelet[1294]: I0111 09:10:22.468009    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5dc3259e-2cc0-400d-b23f-8e9c3620cf32-cni-cfg\") pod \"kindnet-nnd7m\" (UID: \"5dc3259e-2cc0-400d-b23f-8e9c3620cf32\") " pod="kube-system/kindnet-nnd7m"
	Jan 11 09:10:22 newest-cni-193049 kubelet[1294]: I0111 09:10:22.468028    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kct5r\" (UniqueName: \"kubernetes.io/projected/5dc3259e-2cc0-400d-b23f-8e9c3620cf32-kube-api-access-kct5r\") pod \"kindnet-nnd7m\" (UID: \"5dc3259e-2cc0-400d-b23f-8e9c3620cf32\") " pod="kube-system/kindnet-nnd7m"
	Jan 11 09:10:22 newest-cni-193049 kubelet[1294]: E0111 09:10:22.560910    1294 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 11 09:10:22 newest-cni-193049 kubelet[1294]: E0111 09:10:22.560964    1294 projected.go:196] Error preparing data for projected volume kube-api-access-ldp4w for pod kube-system/kube-proxy-nvrgg: configmap "kube-root-ca.crt" not found
	Jan 11 09:10:22 newest-cni-193049 kubelet[1294]: E0111 09:10:22.561055    1294 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7eff21d-1b08-4787-ae22-091ae53fe50c-kube-api-access-ldp4w podName:e7eff21d-1b08-4787-ae22-091ae53fe50c nodeName:}" failed. No retries permitted until 2026-01-11 09:10:23.061014319 +0000 UTC m=+6.054171606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ldp4w" (UniqueName: "kubernetes.io/projected/e7eff21d-1b08-4787-ae22-091ae53fe50c-kube-api-access-ldp4w") pod "kube-proxy-nvrgg" (UID: "e7eff21d-1b08-4787-ae22-091ae53fe50c") : configmap "kube-root-ca.crt" not found
	Jan 11 09:10:22 newest-cni-193049 kubelet[1294]: I0111 09:10:22.656099    1294 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 11 09:10:23 newest-cni-193049 kubelet[1294]: W0111 09:10:23.280341    1294 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/crio-82c0e0bfc366598ea4afef36235fc5f09222a744199c65dcd46bdfebd5050897 WatchSource:0}: Error finding container 82c0e0bfc366598ea4afef36235fc5f09222a744199c65dcd46bdfebd5050897: Status 404 returned error can't find the container with id 82c0e0bfc366598ea4afef36235fc5f09222a744199c65dcd46bdfebd5050897
	Jan 11 09:10:23 newest-cni-193049 kubelet[1294]: E0111 09:10:23.749317    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-193049" containerName="kube-scheduler"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-193049 -n newest-cni-193049
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-193049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-4qsbm storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-193049 describe pod coredns-7d764666f9-4qsbm storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-193049 describe pod coredns-7d764666f9-4qsbm storage-provisioner: exit status 1 (87.545953ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-4qsbm" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-193049 describe pod coredns-7d764666f9-4qsbm storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-588333 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-588333 --alsologtostderr -v=1: exit status 80 (2.351449344s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-588333 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 09:10:25.922549  798086 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:10:25.922663  798086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:10:25.922669  798086 out.go:374] Setting ErrFile to fd 2...
	I0111 09:10:25.922675  798086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:10:25.923028  798086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:10:25.923319  798086 out.go:368] Setting JSON to false
	I0111 09:10:25.923340  798086 mustload.go:66] Loading cluster: default-k8s-diff-port-588333
	I0111 09:10:25.923998  798086 config.go:182] Loaded profile config "default-k8s-diff-port-588333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:10:25.924656  798086 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-588333 --format={{.State.Status}}
	I0111 09:10:25.951835  798086 host.go:66] Checking if "default-k8s-diff-port-588333" exists ...
	I0111 09:10:25.952167  798086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:10:26.034072  798086 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-11 09:10:26.021127467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:10:26.034812  798086 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-588333 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0111 09:10:26.040289  798086 out.go:179] * Pausing node default-k8s-diff-port-588333 ... 
	I0111 09:10:26.043241  798086 host.go:66] Checking if "default-k8s-diff-port-588333" exists ...
	I0111 09:10:26.043623  798086 ssh_runner.go:195] Run: systemctl --version
	I0111 09:10:26.043680  798086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-588333
	I0111 09:10:26.074725  798086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/default-k8s-diff-port-588333/id_rsa Username:docker}
	I0111 09:10:26.181039  798086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:10:26.213126  798086 pause.go:52] kubelet running: true
	I0111 09:10:26.213194  798086 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:10:26.741686  798086 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:10:26.741768  798086 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:10:26.833710  798086 cri.go:96] found id: "c680db18b2931afbe872bdff4c678badd65f59d49da119c16efda3217f1834d6"
	I0111 09:10:26.833741  798086 cri.go:96] found id: "0df8b9a4c407e9731f6c71eb4095846aa25699d52db9617bae6bddb3f9f569f8"
	I0111 09:10:26.833746  798086 cri.go:96] found id: "79ce94787b5bc8ba39984fee7fa881de863dc7f27491e0c59f25f1604967629f"
	I0111 09:10:26.833750  798086 cri.go:96] found id: "7c6183b6143a56c781eb96c23625b670fed80c64491110815f034bceab591fa0"
	I0111 09:10:26.833753  798086 cri.go:96] found id: "e4ae94f42bebfc7b29b6ffa9b2d76e2ad73831ebbfa9b4f121acaf89c0718ec9"
	I0111 09:10:26.833756  798086 cri.go:96] found id: "e7c36bd895a088cee8bf01ae0ac34e5e8eb26713282675fc6d4788401b926477"
	I0111 09:10:26.833763  798086 cri.go:96] found id: "076f1fdf555e06139c3c03315dc96b76587a0287090e0f05c5db8be14ea7a439"
	I0111 09:10:26.833767  798086 cri.go:96] found id: "2ae07275c0ab7a01e2063bff151242ed44a810e62093e55ae36786b9db6a2095"
	I0111 09:10:26.833770  798086 cri.go:96] found id: "6f627745c3daad695d0b29049d2cbdb0651dcdbf59d1dfadfe4715bf0735f857"
	I0111 09:10:26.833776  798086 cri.go:96] found id: "ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d"
	I0111 09:10:26.833779  798086 cri.go:96] found id: "c124cf621b7bef383c51a20213d6f99c43490bc9138b2c57b8e874f868d88edf"
	I0111 09:10:26.833782  798086 cri.go:96] found id: ""
	I0111 09:10:26.833831  798086 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:10:26.848210  798086 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:10:26Z" level=error msg="open /run/runc: no such file or directory"
	I0111 09:10:27.179443  798086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:10:27.199632  798086 pause.go:52] kubelet running: false
	I0111 09:10:27.199693  798086 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:10:27.423779  798086 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:10:27.423858  798086 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:10:27.555015  798086 cri.go:96] found id: "c680db18b2931afbe872bdff4c678badd65f59d49da119c16efda3217f1834d6"
	I0111 09:10:27.555034  798086 cri.go:96] found id: "0df8b9a4c407e9731f6c71eb4095846aa25699d52db9617bae6bddb3f9f569f8"
	I0111 09:10:27.555039  798086 cri.go:96] found id: "79ce94787b5bc8ba39984fee7fa881de863dc7f27491e0c59f25f1604967629f"
	I0111 09:10:27.555042  798086 cri.go:96] found id: "7c6183b6143a56c781eb96c23625b670fed80c64491110815f034bceab591fa0"
	I0111 09:10:27.555046  798086 cri.go:96] found id: "e4ae94f42bebfc7b29b6ffa9b2d76e2ad73831ebbfa9b4f121acaf89c0718ec9"
	I0111 09:10:27.555049  798086 cri.go:96] found id: "e7c36bd895a088cee8bf01ae0ac34e5e8eb26713282675fc6d4788401b926477"
	I0111 09:10:27.555052  798086 cri.go:96] found id: "076f1fdf555e06139c3c03315dc96b76587a0287090e0f05c5db8be14ea7a439"
	I0111 09:10:27.555055  798086 cri.go:96] found id: "2ae07275c0ab7a01e2063bff151242ed44a810e62093e55ae36786b9db6a2095"
	I0111 09:10:27.555058  798086 cri.go:96] found id: "6f627745c3daad695d0b29049d2cbdb0651dcdbf59d1dfadfe4715bf0735f857"
	I0111 09:10:27.555064  798086 cri.go:96] found id: "ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d"
	I0111 09:10:27.555067  798086 cri.go:96] found id: "c124cf621b7bef383c51a20213d6f99c43490bc9138b2c57b8e874f868d88edf"
	I0111 09:10:27.555070  798086 cri.go:96] found id: ""
	I0111 09:10:27.555120  798086 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:10:27.834242  798086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:10:27.848124  798086 pause.go:52] kubelet running: false
	I0111 09:10:27.848203  798086 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:10:28.023532  798086 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:10:28.023642  798086 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:10:28.167277  798086 cri.go:96] found id: "c680db18b2931afbe872bdff4c678badd65f59d49da119c16efda3217f1834d6"
	I0111 09:10:28.167295  798086 cri.go:96] found id: "0df8b9a4c407e9731f6c71eb4095846aa25699d52db9617bae6bddb3f9f569f8"
	I0111 09:10:28.167299  798086 cri.go:96] found id: "79ce94787b5bc8ba39984fee7fa881de863dc7f27491e0c59f25f1604967629f"
	I0111 09:10:28.167303  798086 cri.go:96] found id: "7c6183b6143a56c781eb96c23625b670fed80c64491110815f034bceab591fa0"
	I0111 09:10:28.167306  798086 cri.go:96] found id: "e4ae94f42bebfc7b29b6ffa9b2d76e2ad73831ebbfa9b4f121acaf89c0718ec9"
	I0111 09:10:28.167309  798086 cri.go:96] found id: "e7c36bd895a088cee8bf01ae0ac34e5e8eb26713282675fc6d4788401b926477"
	I0111 09:10:28.167312  798086 cri.go:96] found id: "076f1fdf555e06139c3c03315dc96b76587a0287090e0f05c5db8be14ea7a439"
	I0111 09:10:28.167315  798086 cri.go:96] found id: "2ae07275c0ab7a01e2063bff151242ed44a810e62093e55ae36786b9db6a2095"
	I0111 09:10:28.167318  798086 cri.go:96] found id: "6f627745c3daad695d0b29049d2cbdb0651dcdbf59d1dfadfe4715bf0735f857"
	I0111 09:10:28.167324  798086 cri.go:96] found id: "ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d"
	I0111 09:10:28.167331  798086 cri.go:96] found id: "c124cf621b7bef383c51a20213d6f99c43490bc9138b2c57b8e874f868d88edf"
	I0111 09:10:28.167334  798086 cri.go:96] found id: ""
	I0111 09:10:28.167384  798086 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:10:28.189446  798086 out.go:203] 
	W0111 09:10:28.192618  798086 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:10:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:10:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 09:10:28.192649  798086 out.go:285] * 
	* 
	W0111 09:10:28.198669  798086 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 09:10:28.201854  798086 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-588333 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-588333
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-588333:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f",
	        "Created": "2026-01-11T09:08:13.612670128Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 791778,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:09:22.245236941Z",
	            "FinishedAt": "2026-01-11T09:09:21.433774853Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/hosts",
	        "LogPath": "/var/lib/docker/containers/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f-json.log",
	        "Name": "/default-k8s-diff-port-588333",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-588333:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-588333",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f",
	                "LowerDir": "/var/lib/docker/overlay2/5ed5c49c670be7eacdb8eab8b674e3763ca92e5df45679f0d330c538754b227a-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ed5c49c670be7eacdb8eab8b674e3763ca92e5df45679f0d330c538754b227a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ed5c49c670be7eacdb8eab8b674e3763ca92e5df45679f0d330c538754b227a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ed5c49c670be7eacdb8eab8b674e3763ca92e5df45679f0d330c538754b227a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-588333",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-588333/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-588333",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-588333",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-588333",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13fa7766e1242e379caf21679910d0f63459c71a65b68c919e160f50c50269c0",
	            "SandboxKey": "/var/run/docker/netns/13fa7766e124",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-588333": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:34:ad:f5:b5:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa19db219143297e6d2133400cad3ab3e7355f9d99472fad6a65d0a14f403a70",
	                    "EndpointID": "9ce16b287b16ca9935825217865e149bf93c6fdf3eac19473218733c680ebd25",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-588333",
	                        "ed1214141656"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333: exit status 2 (494.098134ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-588333 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-588333 logs -n 25: (1.819927925s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-236664 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │                     │
	│ delete  │ -p no-preload-236664                                                                                                                                                                                                                          │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ delete  │ -p no-preload-236664                                                                                                                                                                                                                          │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:08 UTC │
	│ ssh     │ force-systemd-flag-630015 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p force-systemd-flag-630015                                                                                                                                                                                                                  │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p disable-driver-mounts-781777                                                                                                                                                                                                               │ disable-driver-mounts-781777 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-630626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │                     │
	│ stop    │ -p embed-certs-630626 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-630626 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:09 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-588333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-588333 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-588333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:10 UTC │
	│ image   │ embed-certs-630626 image list --format=json                                                                                                                                                                                                   │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ pause   │ -p embed-certs-630626 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	│ delete  │ -p embed-certs-630626                                                                                                                                                                                                                         │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ delete  │ -p embed-certs-630626                                                                                                                                                                                                                         │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ start   │ -p newest-cni-193049 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-193049            │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:10 UTC │
	│ addons  │ enable metrics-server -p newest-cni-193049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-193049            │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	│ image   │ default-k8s-diff-port-588333 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ pause   │ -p default-k8s-diff-port-588333 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	│ stop    │ -p newest-cni-193049 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-193049            │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:09:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:09:50.385936  795222 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:09:50.386413  795222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:09:50.386421  795222 out.go:374] Setting ErrFile to fd 2...
	I0111 09:09:50.386427  795222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:09:50.387027  795222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:09:50.387648  795222 out.go:368] Setting JSON to false
	I0111 09:09:50.391812  795222 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13940,"bootTime":1768108650,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:09:50.392566  795222 start.go:143] virtualization:  
	I0111 09:09:50.395980  795222 out.go:179] * [newest-cni-193049] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:09:50.400151  795222 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:09:50.400495  795222 notify.go:221] Checking for updates...
	I0111 09:09:50.406640  795222 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:09:50.410558  795222 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:09:50.414364  795222 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:09:50.418529  795222 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:09:50.422744  795222 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:09:50.427403  795222 config.go:182] Loaded profile config "default-k8s-diff-port-588333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:09:50.427519  795222 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:09:50.486944  795222 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:09:50.487156  795222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:09:50.592261  795222 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 09:09:50.580785318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:09:50.592369  795222 docker.go:319] overlay module found
	I0111 09:09:50.596178  795222 out.go:179] * Using the docker driver based on user configuration
	I0111 09:09:50.599095  795222 start.go:309] selected driver: docker
	I0111 09:09:50.599117  795222 start.go:928] validating driver "docker" against <nil>
	I0111 09:09:50.599131  795222 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:09:50.599843  795222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:09:50.700670  795222 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 09:09:50.691856398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:09:50.700813  795222 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0111 09:09:50.700836  795222 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0111 09:09:50.701052  795222 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 09:09:50.704049  795222 out.go:179] * Using Docker driver with root privileges
	I0111 09:09:50.706852  795222 cni.go:84] Creating CNI manager for ""
	I0111 09:09:50.706912  795222 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:09:50.706936  795222 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 09:09:50.707020  795222 start.go:353] cluster config:
	{Name:newest-cni-193049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-193049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:09:50.710085  795222 out.go:179] * Starting "newest-cni-193049" primary control-plane node in "newest-cni-193049" cluster
	I0111 09:09:50.712878  795222 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:09:50.715792  795222 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:09:50.718616  795222 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:09:50.718665  795222 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 09:09:50.718675  795222 cache.go:65] Caching tarball of preloaded images
	I0111 09:09:50.718780  795222 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:09:50.718791  795222 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 09:09:50.718942  795222 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:09:50.719200  795222 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/config.json ...
	I0111 09:09:50.719233  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/config.json: {Name:mk299c7cbb34a339c1735751e4dbb1bf3f8d929c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:09:50.766929  795222 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:09:50.766952  795222 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:09:50.766972  795222 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:09:50.767009  795222 start.go:360] acquireMachinesLock for newest-cni-193049: {Name:mkf4b4913de610081a1f70a8057cb410a71fc0bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:09:50.767127  795222 start.go:364] duration metric: took 97.749µs to acquireMachinesLock for "newest-cni-193049"
	I0111 09:09:50.767158  795222 start.go:93] Provisioning new machine with config: &{Name:newest-cni-193049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-193049 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:09:50.767233  795222 start.go:125] createHost starting for "" (driver="docker")
	W0111 09:09:48.391636  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:09:50.891454  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	I0111 09:09:50.770736  795222 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 09:09:50.770974  795222 start.go:159] libmachine.API.Create for "newest-cni-193049" (driver="docker")
	I0111 09:09:50.771008  795222 client.go:173] LocalClient.Create starting
	I0111 09:09:50.771083  795222 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem
	I0111 09:09:50.771127  795222 main.go:144] libmachine: Decoding PEM data...
	I0111 09:09:50.771142  795222 main.go:144] libmachine: Parsing certificate...
	I0111 09:09:50.771197  795222 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem
	I0111 09:09:50.771220  795222 main.go:144] libmachine: Decoding PEM data...
	I0111 09:09:50.771232  795222 main.go:144] libmachine: Parsing certificate...
	I0111 09:09:50.771608  795222 cli_runner.go:164] Run: docker network inspect newest-cni-193049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 09:09:50.791729  795222 cli_runner.go:211] docker network inspect newest-cni-193049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 09:09:50.791819  795222 network_create.go:284] running [docker network inspect newest-cni-193049] to gather additional debugging logs...
	I0111 09:09:50.791844  795222 cli_runner.go:164] Run: docker network inspect newest-cni-193049
	W0111 09:09:50.808262  795222 cli_runner.go:211] docker network inspect newest-cni-193049 returned with exit code 1
	I0111 09:09:50.808297  795222 network_create.go:287] error running [docker network inspect newest-cni-193049]: docker network inspect newest-cni-193049: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-193049 not found
	I0111 09:09:50.808310  795222 network_create.go:289] output of [docker network inspect newest-cni-193049]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-193049 not found
	
	** /stderr **
	I0111 09:09:50.808423  795222 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:09:50.824234  795222 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-113e3e286bbe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:2e:86:95:08:19} reservation:<nil>}
	I0111 09:09:50.824565  795222 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-461c1a9d970d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:7e:25:fe:d0:0d} reservation:<nil>}
	I0111 09:09:50.824898  795222 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a38e10816f85 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:42:af:ae:32:ae} reservation:<nil>}
	I0111 09:09:50.825179  795222 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fa19db219143 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:5f:6b:c8:86:a5} reservation:<nil>}
	I0111 09:09:50.825574  795222 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197b140}
	I0111 09:09:50.825601  795222 network_create.go:124] attempt to create docker network newest-cni-193049 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 09:09:50.825663  795222 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-193049 newest-cni-193049
	I0111 09:09:50.892444  795222 network_create.go:108] docker network newest-cni-193049 192.168.85.0/24 created
	I0111 09:09:50.892475  795222 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-193049" container
	I0111 09:09:50.892561  795222 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 09:09:50.911847  795222 cli_runner.go:164] Run: docker volume create newest-cni-193049 --label name.minikube.sigs.k8s.io=newest-cni-193049 --label created_by.minikube.sigs.k8s.io=true
	I0111 09:09:50.931715  795222 oci.go:103] Successfully created a docker volume newest-cni-193049
	I0111 09:09:50.931797  795222 cli_runner.go:164] Run: docker run --rm --name newest-cni-193049-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-193049 --entrypoint /usr/bin/test -v newest-cni-193049:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 09:09:51.767625  795222 oci.go:107] Successfully prepared a docker volume newest-cni-193049
	I0111 09:09:51.767692  795222 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:09:51.767702  795222 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 09:09:51.767783  795222 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-193049:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	W0111 09:09:53.391517  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:09:55.891786  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	I0111 09:09:55.855837  795222 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-193049:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.087990293s)
	I0111 09:09:55.855885  795222 kic.go:203] duration metric: took 4.088179465s to extract preloaded images to volume ...
	W0111 09:09:55.856030  795222 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 09:09:55.856142  795222 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 09:09:55.915816  795222 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-193049 --name newest-cni-193049 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-193049 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-193049 --network newest-cni-193049 --ip 192.168.85.2 --volume newest-cni-193049:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 09:09:56.242985  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Running}}
	I0111 09:09:56.271620  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:09:56.291601  795222 cli_runner.go:164] Run: docker exec newest-cni-193049 stat /var/lib/dpkg/alternatives/iptables
	I0111 09:09:56.344044  795222 oci.go:144] the created container "newest-cni-193049" has a running status.
	I0111 09:09:56.344071  795222 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa...
	I0111 09:09:56.573630  795222 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 09:09:56.610677  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:09:56.635843  795222 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 09:09:56.635863  795222 kic_runner.go:114] Args: [docker exec --privileged newest-cni-193049 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 09:09:56.687624  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:09:56.713420  795222 machine.go:94] provisionDockerMachine start ...
	I0111 09:09:56.713510  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:09:56.743200  795222 main.go:144] libmachine: Using SSH client type: native
	I0111 09:09:56.743539  795222 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33823 <nil> <nil>}
	I0111 09:09:56.743549  795222 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 09:09:56.744194  795222 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38382->127.0.0.1:33823: read: connection reset by peer
	I0111 09:09:59.894198  795222 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-193049
	
	I0111 09:09:59.894227  795222 ubuntu.go:182] provisioning hostname "newest-cni-193049"
	I0111 09:09:59.894302  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:09:59.912243  795222 main.go:144] libmachine: Using SSH client type: native
	I0111 09:09:59.912566  795222 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33823 <nil> <nil>}
	I0111 09:09:59.912587  795222 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-193049 && echo "newest-cni-193049" | sudo tee /etc/hostname
	I0111 09:10:00.173158  795222 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-193049
	
	I0111 09:10:00.173259  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:00.309149  795222 main.go:144] libmachine: Using SSH client type: native
	I0111 09:10:00.309504  795222 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33823 <nil> <nil>}
	I0111 09:10:00.309522  795222 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-193049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-193049/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-193049' | sudo tee -a /etc/hosts; 
				fi
			fi
	W0111 09:09:58.390485  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:10:00.436328  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	I0111 09:10:00.611809  795222 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 09:10:00.611843  795222 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-575040/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-575040/.minikube}
	I0111 09:10:00.611879  795222 ubuntu.go:190] setting up certificates
	I0111 09:10:00.611935  795222 provision.go:84] configureAuth start
	I0111 09:10:00.612039  795222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-193049
	I0111 09:10:00.644171  795222 provision.go:143] copyHostCerts
	I0111 09:10:00.644291  795222 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem, removing ...
	I0111 09:10:00.644317  795222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem
	I0111 09:10:00.644444  795222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/ca.pem (1078 bytes)
	I0111 09:10:00.645577  795222 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem, removing ...
	I0111 09:10:00.645603  795222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem
	I0111 09:10:00.645674  795222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/cert.pem (1123 bytes)
	I0111 09:10:00.645774  795222 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem, removing ...
	I0111 09:10:00.645788  795222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem
	I0111 09:10:00.645819  795222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-575040/.minikube/key.pem (1675 bytes)
	I0111 09:10:00.645888  795222 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem org=jenkins.newest-cni-193049 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-193049]
	I0111 09:10:00.799206  795222 provision.go:177] copyRemoteCerts
	I0111 09:10:00.799276  795222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 09:10:00.799323  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:00.818604  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:00.926554  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 09:10:00.945700  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0111 09:10:00.968132  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 09:10:00.988340  795222 provision.go:87] duration metric: took 376.37212ms to configureAuth
	I0111 09:10:00.988369  795222 ubuntu.go:206] setting minikube options for container-runtime
	I0111 09:10:00.988595  795222 config.go:182] Loaded profile config "newest-cni-193049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:10:00.988710  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:01.006699  795222 main.go:144] libmachine: Using SSH client type: native
	I0111 09:10:01.007056  795222 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33823 <nil> <nil>}
	I0111 09:10:01.007088  795222 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0111 09:10:01.405646  795222 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0111 09:10:01.405672  795222 machine.go:97] duration metric: took 4.692231642s to provisionDockerMachine
	I0111 09:10:01.405683  795222 client.go:176] duration metric: took 10.634664793s to LocalClient.Create
	I0111 09:10:01.405697  795222 start.go:167] duration metric: took 10.634725807s to libmachine.API.Create "newest-cni-193049"
	I0111 09:10:01.405704  795222 start.go:293] postStartSetup for "newest-cni-193049" (driver="docker")
	I0111 09:10:01.405715  795222 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 09:10:01.405796  795222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 09:10:01.405840  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:01.426905  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:01.534248  795222 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 09:10:01.537806  795222 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 09:10:01.537885  795222 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 09:10:01.537911  795222 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/addons for local assets ...
	I0111 09:10:01.537990  795222 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-575040/.minikube/files for local assets ...
	I0111 09:10:01.538082  795222 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem -> 5769072.pem in /etc/ssl/certs
	I0111 09:10:01.538222  795222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 09:10:01.545945  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:10:01.564737  795222 start.go:296] duration metric: took 159.01769ms for postStartSetup
	I0111 09:10:01.565192  795222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-193049
	I0111 09:10:01.582671  795222 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/config.json ...
	I0111 09:10:01.582975  795222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 09:10:01.583019  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:01.600343  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:01.707651  795222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 09:10:01.712912  795222 start.go:128] duration metric: took 10.945663035s to createHost
	I0111 09:10:01.712942  795222 start.go:83] releasing machines lock for "newest-cni-193049", held for 10.945803025s
	I0111 09:10:01.713014  795222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-193049
	I0111 09:10:01.729844  795222 ssh_runner.go:195] Run: cat /version.json
	I0111 09:10:01.729913  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:01.730245  795222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 09:10:01.730306  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:01.753698  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:01.767662  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:01.866237  795222 ssh_runner.go:195] Run: systemctl --version
	I0111 09:10:01.977020  795222 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0111 09:10:02.023568  795222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 09:10:02.028714  795222 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 09:10:02.028800  795222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 09:10:02.060947  795222 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 09:10:02.060970  795222 start.go:496] detecting cgroup driver to use...
	I0111 09:10:02.061004  795222 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 09:10:02.061069  795222 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 09:10:02.080802  795222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 09:10:02.094626  795222 docker.go:218] disabling cri-docker service (if available) ...
	I0111 09:10:02.094779  795222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 09:10:02.114330  795222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 09:10:02.134714  795222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 09:10:02.269714  795222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 09:10:02.400465  795222 docker.go:234] disabling docker service ...
	I0111 09:10:02.400558  795222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 09:10:02.423679  795222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 09:10:02.437461  795222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 09:10:02.567821  795222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 09:10:02.693686  795222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 09:10:02.707900  795222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 09:10:02.722033  795222 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0111 09:10:02.722116  795222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.732266  795222 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0111 09:10:02.732355  795222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.741676  795222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.751422  795222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.760766  795222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 09:10:02.769423  795222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.778601  795222 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.793171  795222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0111 09:10:02.802761  795222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 09:10:02.811220  795222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 09:10:02.819361  795222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:10:02.935914  795222 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0111 09:10:03.115713  795222 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0111 09:10:03.115822  795222 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0111 09:10:03.119903  795222 start.go:574] Will wait 60s for crictl version
	I0111 09:10:03.120047  795222 ssh_runner.go:195] Run: which crictl
	I0111 09:10:03.123662  795222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 09:10:03.151386  795222 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0111 09:10:03.151546  795222 ssh_runner.go:195] Run: crio --version
	I0111 09:10:03.184015  795222 ssh_runner.go:195] Run: crio --version
	I0111 09:10:03.216870  795222 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0111 09:10:03.219692  795222 cli_runner.go:164] Run: docker network inspect newest-cni-193049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 09:10:03.237170  795222 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 09:10:03.241562  795222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:10:03.255686  795222 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0111 09:10:03.258549  795222 kubeadm.go:884] updating cluster {Name:newest-cni-193049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-193049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 09:10:03.258720  795222 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:10:03.258808  795222 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:10:03.307172  795222 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:10:03.307199  795222 crio.go:433] Images already preloaded, skipping extraction
	I0111 09:10:03.307262  795222 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 09:10:03.333822  795222 crio.go:561] all images are preloaded for cri-o runtime.
	I0111 09:10:03.333850  795222 cache_images.go:86] Images are preloaded, skipping loading
	I0111 09:10:03.333859  795222 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0111 09:10:03.333942  795222 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-193049 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-193049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 09:10:03.334031  795222 ssh_runner.go:195] Run: crio config
	I0111 09:10:03.406378  795222 cni.go:84] Creating CNI manager for ""
	I0111 09:10:03.406406  795222 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:10:03.406427  795222 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0111 09:10:03.406453  795222 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-193049 NodeName:newest-cni-193049 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 09:10:03.406603  795222 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-193049"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 09:10:03.406683  795222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 09:10:03.414920  795222 binaries.go:51] Found k8s binaries, skipping transfer
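	The YAML block printed by kubeadm.go:203 above is the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration that minikube renders and stages at /var/tmp/minikube/kubeadm.yaml.new (see the scp step just below). As an illustrative sketch only, the staged file can be inspected and checked on the node itself; this assumes the bundled kubeadm release provides the `kubeadm config validate` subcommand:
	
	# Show the config exactly as minikube staged it (path taken from the log below).
	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	# Ask kubeadm to validate the rendered configuration; the binary directory matches
	# the bundled-binaries path checked in the `sudo ls` step above.
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new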
	I0111 09:10:03.415008  795222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 09:10:03.423080  795222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0111 09:10:03.440348  795222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 09:10:03.457904  795222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0111 09:10:03.474671  795222 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 09:10:03.478617  795222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 09:10:03.488841  795222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:10:03.605985  795222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:10:03.624569  795222 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049 for IP: 192.168.85.2
	I0111 09:10:03.624593  795222 certs.go:195] generating shared ca certs ...
	I0111 09:10:03.624609  795222 certs.go:227] acquiring lock for ca certs: {Name:mk1f12ba12935a8e77585174ab71b380b87aaa85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:03.624751  795222 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key
	I0111 09:10:03.624800  795222 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key
	I0111 09:10:03.624810  795222 certs.go:257] generating profile certs ...
	I0111 09:10:03.624863  795222 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/client.key
	I0111 09:10:03.624905  795222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/client.crt with IP's: []
	I0111 09:10:03.719493  795222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/client.crt ...
	I0111 09:10:03.719527  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/client.crt: {Name:mk337c4d1ac253622d62a845d0c98d56efc55a63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:03.719738  795222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/client.key ...
	I0111 09:10:03.719753  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/client.key: {Name:mka494a3b746d2c5b74df371fe6fcf9db4133d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:03.719855  795222 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.key.452904eb
	I0111 09:10:03.719874  795222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.crt.452904eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0111 09:10:03.832895  795222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.crt.452904eb ...
	I0111 09:10:03.832925  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.crt.452904eb: {Name:mkeea2df728596d775e3b25db2cc5a9d45ceec4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:03.833109  795222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.key.452904eb ...
	I0111 09:10:03.833123  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.key.452904eb: {Name:mk2e52d7929d51b03bb2a19c571839aff9b24ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:03.833222  795222 certs.go:382] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.crt.452904eb -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.crt
	I0111 09:10:03.833303  795222 certs.go:386] copying /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.key.452904eb -> /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.key
	I0111 09:10:03.833368  795222 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.key
	I0111 09:10:03.833386  795222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.crt with IP's: []
	I0111 09:10:04.193953  795222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.crt ...
	I0111 09:10:04.193985  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.crt: {Name:mkb856bd4da9b67fe469d2e739f585ce3b0d4637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:04.194191  795222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.key ...
	I0111 09:10:04.194208  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.key: {Name:mkcea439c07934e1b9dd6c99b55d0b52c8d7c9c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:04.194404  795222 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem (1338 bytes)
	W0111 09:10:04.194452  795222 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907_empty.pem, impossibly tiny 0 bytes
	I0111 09:10:04.194467  795222 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 09:10:04.194497  795222 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/ca.pem (1078 bytes)
	I0111 09:10:04.194526  795222 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/cert.pem (1123 bytes)
	I0111 09:10:04.194554  795222 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/certs/key.pem (1675 bytes)
	I0111 09:10:04.194611  795222 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem (1708 bytes)
	I0111 09:10:04.195183  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 09:10:04.216174  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0111 09:10:04.237551  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 09:10:04.257045  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 09:10:04.277618  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0111 09:10:04.300447  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0111 09:10:04.320876  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 09:10:04.339781  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 09:10:04.362890  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/ssl/certs/5769072.pem --> /usr/share/ca-certificates/5769072.pem (1708 bytes)
	I0111 09:10:04.382787  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 09:10:04.405970  795222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-575040/.minikube/certs/576907.pem --> /usr/share/ca-certificates/576907.pem (1338 bytes)
	I0111 09:10:04.427126  795222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 09:10:04.446103  795222 ssh_runner.go:195] Run: openssl version
	I0111 09:10:04.456173  795222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5769072.pem
	I0111 09:10:04.466714  795222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5769072.pem /etc/ssl/certs/5769072.pem
	I0111 09:10:04.478985  795222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5769072.pem
	I0111 09:10:04.486907  795222 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 08:20 /usr/share/ca-certificates/5769072.pem
	I0111 09:10:04.487018  795222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5769072.pem
	I0111 09:10:04.541304  795222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 09:10:04.559931  795222 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5769072.pem /etc/ssl/certs/3ec20f2e.0
	I0111 09:10:04.578304  795222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:10:04.601399  795222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 09:10:04.615558  795222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:10:04.620491  795222 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 08:14 /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:10:04.620611  795222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 09:10:04.664387  795222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 09:10:04.673456  795222 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 09:10:04.682025  795222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/576907.pem
	I0111 09:10:04.690526  795222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/576907.pem /etc/ssl/certs/576907.pem
	I0111 09:10:04.699366  795222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/576907.pem
	I0111 09:10:04.703656  795222 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 08:20 /usr/share/ca-certificates/576907.pem
	I0111 09:10:04.703777  795222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/576907.pem
	I0111 09:10:04.746990  795222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 09:10:04.755151  795222 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/576907.pem /etc/ssl/certs/51391683.0
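	The openssl/ln sequence above follows the standard OpenSSL hashed-directory layout: each PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash (for example b5213941.0 for minikubeCA.pem) so that TLS clients scanning the default CA path can locate it. A minimal sketch of the same pattern for an arbitrary certificate, where example.pem is a placeholder name:
	
	# Compute the subject hash openssl uses when scanning a CA directory.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	# Link the certificate under its hashed name, mirroring the ln -fs calls in the log.
	sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"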
	I0111 09:10:04.763804  795222 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 09:10:04.767857  795222 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 09:10:04.767955  795222 kubeadm.go:401] StartCluster: {Name:newest-cni-193049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-193049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:10:04.768046  795222 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0111 09:10:04.768114  795222 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 09:10:04.797268  795222 cri.go:96] found id: ""
	I0111 09:10:04.797390  795222 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 09:10:04.805780  795222 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 09:10:04.815451  795222 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 09:10:04.815570  795222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 09:10:04.825100  795222 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 09:10:04.825174  795222 kubeadm.go:158] found existing configuration files:
	
	I0111 09:10:04.825258  795222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 09:10:04.833830  795222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 09:10:04.833898  795222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 09:10:04.841870  795222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 09:10:04.849815  795222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 09:10:04.849929  795222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 09:10:04.857501  795222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 09:10:04.865690  795222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 09:10:04.865809  795222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 09:10:04.873598  795222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 09:10:04.882058  795222 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 09:10:04.882230  795222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 09:10:04.891888  795222 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 09:10:04.930905  795222 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 09:10:04.931370  795222 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 09:10:05.022567  795222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 09:10:05.022651  795222 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 09:10:05.022693  795222 kubeadm.go:319] OS: Linux
	I0111 09:10:05.022745  795222 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 09:10:05.022797  795222 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 09:10:05.022846  795222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 09:10:05.022899  795222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 09:10:05.022952  795222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 09:10:05.023003  795222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 09:10:05.023053  795222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 09:10:05.023101  795222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 09:10:05.023147  795222 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 09:10:05.103311  795222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 09:10:05.103525  795222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 09:10:05.103661  795222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 09:10:05.111201  795222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 09:10:05.116856  795222 out.go:252]   - Generating certificates and keys ...
	I0111 09:10:05.117033  795222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 09:10:05.117139  795222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 09:10:05.200359  795222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W0111 09:10:02.891825  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:10:04.892529  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:10:06.892952  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	I0111 09:10:05.599265  795222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 09:10:05.909397  795222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 09:10:06.493467  795222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 09:10:06.596775  795222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 09:10:06.597174  795222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-193049] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 09:10:06.940726  795222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 09:10:06.941357  795222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-193049] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 09:10:07.346544  795222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 09:10:07.408308  795222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 09:10:07.605470  795222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 09:10:07.605786  795222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 09:10:07.735957  795222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 09:10:08.316430  795222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 09:10:08.623663  795222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 09:10:08.790346  795222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 09:10:09.147694  795222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 09:10:09.148470  795222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 09:10:09.151289  795222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 09:10:09.154962  795222 out.go:252]   - Booting up control plane ...
	I0111 09:10:09.155167  795222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 09:10:09.155311  795222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 09:10:09.155396  795222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 09:10:09.179130  795222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 09:10:09.179450  795222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 09:10:09.188601  795222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 09:10:09.188887  795222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 09:10:09.189083  795222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 09:10:09.332344  795222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 09:10:09.332465  795222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W0111 09:10:09.409495  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	W0111 09:10:11.899838  791650 pod_ready.go:104] pod "coredns-7d764666f9-2lh6p" is not "Ready", error: <nil>
	I0111 09:10:12.390550  791650 pod_ready.go:94] pod "coredns-7d764666f9-2lh6p" is "Ready"
	I0111 09:10:12.390575  791650 pod_ready.go:86] duration metric: took 37.505644311s for pod "coredns-7d764666f9-2lh6p" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.394080  791650 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.400687  791650 pod_ready.go:94] pod "etcd-default-k8s-diff-port-588333" is "Ready"
	I0111 09:10:12.400757  791650 pod_ready.go:86] duration metric: took 6.585397ms for pod "etcd-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.408037  791650 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.412678  791650 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-588333" is "Ready"
	I0111 09:10:12.412750  791650 pod_ready.go:86] duration metric: took 4.622475ms for pod "kube-apiserver-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.416338  791650 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.589548  791650 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-588333" is "Ready"
	I0111 09:10:12.589623  791650 pod_ready.go:86] duration metric: took 173.225634ms for pod "kube-controller-manager-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:12.789080  791650 pod_ready.go:83] waiting for pod "kube-proxy-g4x2l" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:13.188851  791650 pod_ready.go:94] pod "kube-proxy-g4x2l" is "Ready"
	I0111 09:10:13.188937  791650 pod_ready.go:86] duration metric: took 399.77626ms for pod "kube-proxy-g4x2l" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:13.389585  791650 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:13.789192  791650 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-588333" is "Ready"
	I0111 09:10:13.789223  791650 pod_ready.go:86] duration metric: took 399.610728ms for pod "kube-scheduler-default-k8s-diff-port-588333" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 09:10:13.789246  791650 pod_ready.go:40] duration metric: took 38.908633874s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 09:10:13.870342  791650 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 09:10:13.873316  791650 out.go:203] 
	W0111 09:10:13.876148  791650 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 09:10:13.879057  791650 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:10:13.882042  791650 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-588333" cluster and "default" namespace by default
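	The pod_ready.go lines above poll each kube-system pod (selected by the labels listed at the end of that block) until its Ready condition is true or the pod is gone. As a rough, hand-run equivalent outside the test harness, the same readiness gate can be expressed with kubectl wait; the selectors are taken from the label list in the log, and the timeout is an arbitrary illustrative value:
	
	# Wait for CoreDNS and kube-proxy pods to become Ready.
	kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s
	kubectl -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=120s
	# Static control-plane pods are selected by their component label.
	kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=120s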
	I0111 09:10:10.834872  795222 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50188301s
	I0111 09:10:10.834984  795222 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0111 09:10:10.835071  795222 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0111 09:10:10.835165  795222 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0111 09:10:10.835247  795222 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0111 09:10:12.845189  795222 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.010467502s
	I0111 09:10:14.468107  795222 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.633559739s
	I0111 09:10:16.335817  795222 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501278649s
	I0111 09:10:16.370551  795222 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 09:10:16.388634  795222 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 09:10:16.407903  795222 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 09:10:16.408104  795222 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-193049 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 09:10:16.422754  795222 kubeadm.go:319] [bootstrap-token] Using token: zs68fl.2gyixjjdurk170u7
	I0111 09:10:16.424976  795222 out.go:252]   - Configuring RBAC rules ...
	I0111 09:10:16.425106  795222 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 09:10:16.432198  795222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 09:10:16.441299  795222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 09:10:16.445968  795222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 09:10:16.453571  795222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 09:10:16.459213  795222 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 09:10:16.744165  795222 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 09:10:17.208563  795222 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 09:10:17.743160  795222 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 09:10:17.744717  795222 kubeadm.go:319] 
	I0111 09:10:17.744813  795222 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 09:10:17.744829  795222 kubeadm.go:319] 
	I0111 09:10:17.744924  795222 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 09:10:17.744937  795222 kubeadm.go:319] 
	I0111 09:10:17.744970  795222 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 09:10:17.745041  795222 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 09:10:17.745107  795222 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 09:10:17.745117  795222 kubeadm.go:319] 
	I0111 09:10:17.745188  795222 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 09:10:17.745199  795222 kubeadm.go:319] 
	I0111 09:10:17.745283  795222 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 09:10:17.745292  795222 kubeadm.go:319] 
	I0111 09:10:17.745379  795222 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 09:10:17.745506  795222 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 09:10:17.745610  795222 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 09:10:17.745648  795222 kubeadm.go:319] 
	I0111 09:10:17.745765  795222 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 09:10:17.745899  795222 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 09:10:17.745912  795222 kubeadm.go:319] 
	I0111 09:10:17.746156  795222 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zs68fl.2gyixjjdurk170u7 \
	I0111 09:10:17.746280  795222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb \
	I0111 09:10:17.746302  795222 kubeadm.go:319] 	--control-plane 
	I0111 09:10:17.746306  795222 kubeadm.go:319] 
	I0111 09:10:17.746409  795222 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 09:10:17.746413  795222 kubeadm.go:319] 
	I0111 09:10:17.746508  795222 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zs68fl.2gyixjjdurk170u7 \
	I0111 09:10:17.746644  795222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:dadc6d67a47af54d2945c6c16a1b243b0393e65acd660df9bab1ddf77078f1eb 
	I0111 09:10:17.750699  795222 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 09:10:17.751118  795222 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 09:10:17.751234  795222 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 09:10:17.751257  795222 cni.go:84] Creating CNI manager for ""
	I0111 09:10:17.751264  795222 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:10:17.756229  795222 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 09:10:17.759172  795222 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 09:10:17.763280  795222 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0111 09:10:17.763302  795222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 09:10:17.777152  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0111 09:10:18.079462  795222 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 09:10:18.079583  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:18.079603  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-193049 minikube.k8s.io/updated_at=2026_01_11T09_10_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=newest-cni-193049 minikube.k8s.io/primary=true
	I0111 09:10:18.240335  795222 ops.go:34] apiserver oom_adj: -16
	I0111 09:10:18.240428  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:18.741186  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:19.240958  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:19.741335  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:20.241479  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:20.740942  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:21.241263  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:21.740619  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:22.240839  795222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 09:10:22.397596  795222 kubeadm.go:1114] duration metric: took 4.318072401s to wait for elevateKubeSystemPrivileges
	I0111 09:10:22.397625  795222 kubeadm.go:403] duration metric: took 17.629673831s to StartCluster
	I0111 09:10:22.397642  795222 settings.go:142] acquiring lock: {Name:mk6abd3345b4dadc44778666ff5cf67e8185cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:22.397703  795222 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:10:22.398632  795222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/kubeconfig: {Name:mk35142bcc246507a5c48f4d47f59edb4002db5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:22.398841  795222 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:10:22.398922  795222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 09:10:22.399158  795222 config.go:182] Loaded profile config "newest-cni-193049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:10:22.399195  795222 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 09:10:22.399250  795222 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-193049"
	I0111 09:10:22.399264  795222 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-193049"
	I0111 09:10:22.399285  795222 host.go:66] Checking if "newest-cni-193049" exists ...
	I0111 09:10:22.399786  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:10:22.400741  795222 addons.go:70] Setting default-storageclass=true in profile "newest-cni-193049"
	I0111 09:10:22.400765  795222 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-193049"
	I0111 09:10:22.401071  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:10:22.404322  795222 out.go:179] * Verifying Kubernetes components...
	I0111 09:10:22.408039  795222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 09:10:22.435055  795222 addons.go:239] Setting addon default-storageclass=true in "newest-cni-193049"
	I0111 09:10:22.435097  795222 host.go:66] Checking if "newest-cni-193049" exists ...
	I0111 09:10:22.435520  795222 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:10:22.456262  795222 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 09:10:22.462307  795222 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:10:22.462334  795222 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 09:10:22.462402  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:22.480793  795222 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 09:10:22.480815  795222 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 09:10:22.480878  795222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:22.506248  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:22.527400  795222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:22.777664  795222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0111 09:10:22.777829  795222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 09:10:22.847488  795222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 09:10:22.853408  795222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 09:10:23.194302  795222 api_server.go:52] waiting for apiserver process to appear ...
	I0111 09:10:23.194410  795222 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0111 09:10:23.196073  795222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 09:10:23.599261  795222 api_server.go:72] duration metric: took 1.200394024s to wait for apiserver process to appear ...
	I0111 09:10:23.599341  795222 api_server.go:88] waiting for apiserver healthz status ...
	I0111 09:10:23.599372  795222 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0111 09:10:23.614982  795222 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0111 09:10:23.617043  795222 api_server.go:141] control plane version: v1.35.0
	I0111 09:10:23.617066  795222 api_server.go:131] duration metric: took 17.704465ms to wait for apiserver health ...
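	The healthz probe above is a plain HTTPS GET against the apiserver. Since /healthz (like /livez and /readyz) is readable by unauthenticated clients through the default system:public-info-viewer binding, the same check can be reproduced by hand from the node; -k skips verification of the apiserver serving certificate, just as an anonymous probe would:
	
	# Expect the literal response "ok", matching the 200 logged above.
	curl -sk https://192.168.85.2:8443/healthz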
	I0111 09:10:23.617075  795222 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 09:10:23.625419  795222 system_pods.go:59] 8 kube-system pods found
	I0111 09:10:23.625507  795222 system_pods.go:61] "coredns-7d764666f9-4qsbm" [8662ede8-99ed-41d7-a141-89503f63b4e0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0111 09:10:23.625531  795222 system_pods.go:61] "etcd-newest-cni-193049" [a4912791-1140-4aa0-945b-575738a94e8f] Running
	I0111 09:10:23.625578  795222 system_pods.go:61] "kindnet-nnd7m" [5dc3259e-2cc0-400d-b23f-8e9c3620cf32] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0111 09:10:23.625611  795222 system_pods.go:61] "kube-apiserver-newest-cni-193049" [46ff78e7-d56a-4b2c-8f53-9ee776ca8da3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0111 09:10:23.625659  795222 system_pods.go:61] "kube-controller-manager-newest-cni-193049" [48de95e0-e1e4-4ae4-93b3-5ddd0bab2034] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0111 09:10:23.625687  795222 system_pods.go:61] "kube-proxy-nvrgg" [e7eff21d-1b08-4787-ae22-091ae53fe50c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0111 09:10:23.625707  795222 system_pods.go:61] "kube-scheduler-newest-cni-193049" [6e9e362a-8cdc-49d5-95e7-984ebf01ce4b] Running
	I0111 09:10:23.625744  795222 system_pods.go:61] "storage-provisioner" [de2a52c5-86cc-4d8a-a725-505c47a2e932] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0111 09:10:23.625770  795222 system_pods.go:74] duration metric: took 8.687538ms to wait for pod list to return data ...
	I0111 09:10:23.625793  795222 default_sa.go:34] waiting for default service account to be created ...
	I0111 09:10:23.626910  795222 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0111 09:10:23.630022  795222 addons.go:530] duration metric: took 1.230823846s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0111 09:10:23.634071  795222 default_sa.go:45] found service account: "default"
	I0111 09:10:23.634171  795222 default_sa.go:55] duration metric: took 8.3435ms for default service account to be created ...
	I0111 09:10:23.634201  795222 kubeadm.go:587] duration metric: took 1.235335601s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 09:10:23.634247  795222 node_conditions.go:102] verifying NodePressure condition ...
	I0111 09:10:23.639980  795222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 09:10:23.640007  795222 node_conditions.go:123] node cpu capacity is 2
	I0111 09:10:23.640019  795222 node_conditions.go:105] duration metric: took 5.750252ms to run NodePressure ...
	I0111 09:10:23.640032  795222 start.go:242] waiting for startup goroutines ...
	I0111 09:10:23.698670  795222 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-193049" context rescaled to 1 replicas
	I0111 09:10:23.698751  795222 start.go:247] waiting for cluster config update ...
	I0111 09:10:23.698778  795222 start.go:256] writing updated cluster config ...
	I0111 09:10:23.699098  795222 ssh_runner.go:195] Run: rm -f paused
	I0111 09:10:23.803435  795222 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 09:10:23.806299  795222 out.go:203] 
	W0111 09:10:23.809541  795222 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 09:10:23.812741  795222 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 09:10:23.815869  795222 out.go:179] * Done! kubectl is now configured to use "newest-cni-193049" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 11 09:10:04 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:04.568203615Z" level=info msg="Created container c680db18b2931afbe872bdff4c678badd65f59d49da119c16efda3217f1834d6: kube-system/storage-provisioner/storage-provisioner" id=c3217831-d213-4db5-8a7d-4a4c424cdddb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:04 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:04.571349896Z" level=info msg="Starting container: c680db18b2931afbe872bdff4c678badd65f59d49da119c16efda3217f1834d6" id=8ff78661-e164-44ef-9228-9771f2513563 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:10:04 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:04.574668937Z" level=info msg="Started container" PID=1690 containerID=c680db18b2931afbe872bdff4c678badd65f59d49da119c16efda3217f1834d6 description=kube-system/storage-provisioner/storage-provisioner id=8ff78661-e164-44ef-9228-9771f2513563 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c17bba03f5854d0584f77b5a0f3e71ef2ba2593345b0490bf2b92abfa3869c8
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.248539279Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.248577754Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.270100426Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.270155549Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.325262808Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.32544325Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.325540326Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.356306596Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.356692964Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.200062482Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=391a28c1-62c4-4e86-a8a3-c0fb43d76ed3 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.20525646Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=158cea12-fc95-43b0-89cc-cc14f568ca49 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.207835037Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld/dashboard-metrics-scraper" id=4936f8a7-00fb-4197-a938-1385577f7ea2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.207962866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.2212096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.221898364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.247699934Z" level=info msg="Created container ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld/dashboard-metrics-scraper" id=4936f8a7-00fb-4197-a938-1385577f7ea2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.250781402Z" level=info msg="Starting container: ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d" id=dfc88000-4896-4fa6-862d-03539eb87a6d name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.256786533Z" level=info msg="Started container" PID=1762 containerID=ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld/dashboard-metrics-scraper id=dfc88000-4896-4fa6-862d-03539eb87a6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=10293ed9ed54ca09066b5d5116d3452f56beccfa50b30dfb284247c3caddd9a9
	Jan 11 09:10:15 default-k8s-diff-port-588333 conmon[1760]: conmon ca5b9d5226493a4fa53c <ninfo>: container 1762 exited with status 1
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.509340316Z" level=info msg="Removing container: ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1" id=26680182-940a-413c-8c06-065f61045dd9 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.520200394Z" level=info msg="Error loading conmon cgroup of container ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1: cgroup deleted" id=26680182-940a-413c-8c06-065f61045dd9 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.523360139Z" level=info msg="Removed container ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld/dashboard-metrics-scraper" id=26680182-940a-413c-8c06-065f61045dd9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	ca5b9d5226493       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   3                   10293ed9ed54c       dashboard-metrics-scraper-867fb5f87b-l5rld             kubernetes-dashboard
	c680db18b2931       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago      Running             storage-provisioner         2                   2c17bba03f585       storage-provisioner                                    kube-system
	c124cf621b7be       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago      Running             kubernetes-dashboard        0                   3f35c7d2664eb       kubernetes-dashboard-b84665fb8-72rrq                   kubernetes-dashboard
	0df8b9a4c407e       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           55 seconds ago      Running             coredns                     1                   c7cb706b8f588       coredns-7d764666f9-2lh6p                               kube-system
	aa393b14f1ac7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago      Running             busybox                     1                   a255bf2aeab92       busybox                                                default
	79ce94787b5bc       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           55 seconds ago      Running             kube-proxy                  1                   c8b57393e7207       kube-proxy-g4x2l                                       kube-system
	7c6183b6143a5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago      Exited              storage-provisioner         1                   2c17bba03f585       storage-provisioner                                    kube-system
	e4ae94f42bebf       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago      Running             kindnet-cni                 1                   9fe6a3befa242       kindnet-8pg22                                          kube-system
	e7c36bd895a08       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           59 seconds ago      Running             kube-scheduler              1                   46418d423e5a9       kube-scheduler-default-k8s-diff-port-588333            kube-system
	076f1fdf555e0       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           59 seconds ago      Running             kube-controller-manager     1                   35e3a2f2db7b2       kube-controller-manager-default-k8s-diff-port-588333   kube-system
	2ae07275c0ab7       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           59 seconds ago      Running             etcd                        1                   ff5e762aa0c3a       etcd-default-k8s-diff-port-588333                      kube-system
	6f627745c3daa       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           59 seconds ago      Running             kube-apiserver              1                   da2c84b363e82       kube-apiserver-default-k8s-diff-port-588333            kube-system
	
	
	==> coredns [0df8b9a4c407e9731f6c71eb4095846aa25699d52db9617bae6bddb3f9f569f8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53145 - 1584 "HINFO IN 2755546954928543403.1820082797059822654. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013996939s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-588333
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-588333
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=default-k8s-diff-port-588333
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_08_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:08:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-588333
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:10:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:10:14 +0000   Sun, 11 Jan 2026 09:08:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:10:14 +0000   Sun, 11 Jan 2026 09:08:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:10:14 +0000   Sun, 11 Jan 2026 09:08:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 09:10:14 +0000   Sun, 11 Jan 2026 09:08:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-588333
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                3726b86b-01d8-43b3-a465-e0aaf1859904
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-2lh6p                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-default-k8s-diff-port-588333                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-8pg22                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-588333             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-588333    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-g4x2l                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-588333             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-l5rld              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-72rrq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node default-k8s-diff-port-588333 event: Registered Node default-k8s-diff-port-588333 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node default-k8s-diff-port-588333 event: Registered Node default-k8s-diff-port-588333 in Controller
	
	
	==> dmesg <==
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	[Jan11 09:06] overlayfs: idmapped layers are currently not supported
	[Jan11 09:07] overlayfs: idmapped layers are currently not supported
	[Jan11 09:08] overlayfs: idmapped layers are currently not supported
	[ +12.491892] overlayfs: idmapped layers are currently not supported
	[Jan11 09:09] overlayfs: idmapped layers are currently not supported
	[Jan11 09:10] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2ae07275c0ab7a01e2063bff151242ed44a810e62093e55ae36786b9db6a2095] <==
	{"level":"info","ts":"2026-01-11T09:09:30.000198Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:09:30.000259Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:09:29.993475Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-11T09:09:29.993564Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T09:09:30.000428Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T09:09:30.000466Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T09:09:30.000554Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T09:09:30.254273Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T09:09:30.254335Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:09:30.254371Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T09:09:30.254383Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:09:30.254397Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T09:09:30.257022Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-11T09:09:30.257054Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:09:30.257071Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-11T09:09:30.257080Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-11T09:09:30.259843Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-588333 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:09:30.259963Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:09:30.260845Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:09:30.262289Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:09:30.279161Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-11T09:09:30.279948Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:09:30.280607Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T09:09:30.297137Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:09:30.297279Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:10:29 up  3:52,  0 user,  load average: 4.40, 2.57, 2.15
	Linux default-k8s-diff-port-588333 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e4ae94f42bebfc7b29b6ffa9b2d76e2ad73831ebbfa9b4f121acaf89c0718ec9] <==
	I0111 09:09:34.024461       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:09:34.041431       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0111 09:09:34.041571       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:09:34.041584       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:09:34.041600       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:09:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:09:34.241377       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:09:34.241394       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:09:34.241408       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:09:34.241724       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0111 09:10:04.242375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0111 09:10:04.242529       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0111 09:10:04.242638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0111 09:10:04.242672       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0111 09:10:05.842113       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 09:10:05.842177       1 metrics.go:72] Registering metrics
	I0111 09:10:05.842287       1 controller.go:711] "Syncing nftables rules"
	I0111 09:10:14.241639       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 09:10:14.241701       1 main.go:301] handling current node
	I0111 09:10:24.246382       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 09:10:24.246541       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6f627745c3daad695d0b29049d2cbdb0651dcdbf59d1dfadfe4715bf0735f857] <==
	I0111 09:09:33.262008       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:33.262031       1 policy_source.go:248] refreshing policies
	I0111 09:09:33.280307       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:09:33.280537       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 09:09:33.280598       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 09:09:33.280650       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:33.280686       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0111 09:09:33.280694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0111 09:09:33.280773       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0111 09:09:33.281253       1 aggregator.go:187] initial CRD sync complete...
	I0111 09:09:33.281276       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 09:09:33.281282       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 09:09:33.281288       1 cache.go:39] Caches are synced for autoregister controller
	E0111 09:09:33.319964       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 09:09:33.356539       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 09:09:33.833746       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 09:09:34.301056       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 09:09:34.458778       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 09:09:34.515411       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:09:34.528710       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:09:34.612377       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.40.75"}
	I0111 09:09:34.649052       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.199.50"}
	I0111 09:09:36.819622       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:09:36.922533       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 09:09:37.048468       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [076f1fdf555e06139c3c03315dc96b76587a0287090e0f05c5db8be14ea7a439] <==
	I0111 09:09:36.327242       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.327343       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.324763       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.327917       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.327971       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.328182       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.328233       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.328268       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.328336       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.328525       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.328795       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.329258       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.324651       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.324755       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.324771       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.326250       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.327923       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0111 09:09:36.330247       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-588333"
	I0111 09:09:36.330338       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0111 09:09:36.341953       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:09:36.362400       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.425643       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.425668       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 09:09:36.425674       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 09:09:36.443306       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [79ce94787b5bc8ba39984fee7fa881de863dc7f27491e0c59f25f1604967629f] <==
	I0111 09:09:34.062642       1 server_linux.go:53] "Using iptables proxy"
	I0111 09:09:34.223804       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:09:34.324215       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:34.324275       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0111 09:09:34.324385       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 09:09:34.464341       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:09:34.464398       1 server_linux.go:136] "Using iptables Proxier"
	I0111 09:09:34.468330       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 09:09:34.468609       1 server.go:529] "Version info" version="v1.35.0"
	I0111 09:09:34.468634       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:09:34.470021       1 config.go:200] "Starting service config controller"
	I0111 09:09:34.470041       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 09:09:34.471977       1 config.go:106] "Starting endpoint slice config controller"
	I0111 09:09:34.471991       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 09:09:34.472757       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 09:09:34.476578       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 09:09:34.476642       1 config.go:309] "Starting node config controller"
	I0111 09:09:34.476647       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 09:09:34.476654       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 09:09:34.571992       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 09:09:34.573181       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 09:09:34.577309       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e7c36bd895a088cee8bf01ae0ac34e5e8eb26713282675fc6d4788401b926477] <==
	I0111 09:09:31.467610       1 serving.go:386] Generated self-signed cert in-memory
	W0111 09:09:33.109727       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 09:09:33.109756       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 09:09:33.109765       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 09:09:33.109772       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 09:09:33.256284       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 09:09:33.256312       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:09:33.264497       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 09:09:33.264613       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 09:09:33.264624       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:09:33.264638       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 09:09:33.366575       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 09:09:52 default-k8s-diff-port-588333 kubelet[790]: E0111 09:09:52.432078     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-72rrq" containerName="kubernetes-dashboard"
	Jan 11 09:09:53 default-k8s-diff-port-588333 kubelet[790]: E0111 09:09:53.755934     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:53 default-k8s-diff-port-588333 kubelet[790]: I0111 09:09:53.755975     790 scope.go:122] "RemoveContainer" containerID="8d8a6738c33c6989ac46cb6eb815271beb8110680ca0dd030748394d1c81a86f"
	Jan 11 09:09:54 default-k8s-diff-port-588333 kubelet[790]: I0111 09:09:54.440615     790 scope.go:122] "RemoveContainer" containerID="8d8a6738c33c6989ac46cb6eb815271beb8110680ca0dd030748394d1c81a86f"
	Jan 11 09:09:54 default-k8s-diff-port-588333 kubelet[790]: E0111 09:09:54.440935     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:54 default-k8s-diff-port-588333 kubelet[790]: I0111 09:09:54.440964     790 scope.go:122] "RemoveContainer" containerID="ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1"
	Jan 11 09:09:54 default-k8s-diff-port-588333 kubelet[790]: E0111 09:09:54.441152     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-l5rld_kubernetes-dashboard(e475b492-919c-4a78-97a7-24ab93acf554)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" podUID="e475b492-919c-4a78-97a7-24ab93acf554"
	Jan 11 09:09:54 default-k8s-diff-port-588333 kubelet[790]: I0111 09:09:54.468233     790 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-72rrq" podStartSLOduration=5.395011981 podStartE2EDuration="18.467349148s" podCreationTimestamp="2026-01-11 09:09:36 +0000 UTC" firstStartedPulling="2026-01-11 09:09:37.392093384 +0000 UTC m=+8.392736706" lastFinishedPulling="2026-01-11 09:09:50.464430551 +0000 UTC m=+21.465073873" observedRunningTime="2026-01-11 09:09:51.457022732 +0000 UTC m=+22.457666054" watchObservedRunningTime="2026-01-11 09:09:54.467349148 +0000 UTC m=+25.467992478"
	Jan 11 09:10:03 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:03.756203     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" containerName="dashboard-metrics-scraper"
	Jan 11 09:10:03 default-k8s-diff-port-588333 kubelet[790]: I0111 09:10:03.756966     790 scope.go:122] "RemoveContainer" containerID="ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1"
	Jan 11 09:10:03 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:03.757679     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-l5rld_kubernetes-dashboard(e475b492-919c-4a78-97a7-24ab93acf554)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" podUID="e475b492-919c-4a78-97a7-24ab93acf554"
	Jan 11 09:10:04 default-k8s-diff-port-588333 kubelet[790]: I0111 09:10:04.474650     790 scope.go:122] "RemoveContainer" containerID="7c6183b6143a56c781eb96c23625b670fed80c64491110815f034bceab591fa0"
	Jan 11 09:10:11 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:11.884071     790 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-2lh6p" containerName="coredns"
	Jan 11 09:10:15 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:15.199487     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" containerName="dashboard-metrics-scraper"
	Jan 11 09:10:15 default-k8s-diff-port-588333 kubelet[790]: I0111 09:10:15.199524     790 scope.go:122] "RemoveContainer" containerID="ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1"
	Jan 11 09:10:15 default-k8s-diff-port-588333 kubelet[790]: I0111 09:10:15.505748     790 scope.go:122] "RemoveContainer" containerID="ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1"
	Jan 11 09:10:15 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:15.506231     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" containerName="dashboard-metrics-scraper"
	Jan 11 09:10:15 default-k8s-diff-port-588333 kubelet[790]: I0111 09:10:15.506268     790 scope.go:122] "RemoveContainer" containerID="ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d"
	Jan 11 09:10:15 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:15.506504     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-l5rld_kubernetes-dashboard(e475b492-919c-4a78-97a7-24ab93acf554)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" podUID="e475b492-919c-4a78-97a7-24ab93acf554"
	Jan 11 09:10:23 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:23.756138     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" containerName="dashboard-metrics-scraper"
	Jan 11 09:10:23 default-k8s-diff-port-588333 kubelet[790]: I0111 09:10:23.756190     790 scope.go:122] "RemoveContainer" containerID="ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d"
	Jan 11 09:10:23 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:23.756359     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-l5rld_kubernetes-dashboard(e475b492-919c-4a78-97a7-24ab93acf554)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" podUID="e475b492-919c-4a78-97a7-24ab93acf554"
	Jan 11 09:10:26 default-k8s-diff-port-588333 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 09:10:26 default-k8s-diff-port-588333 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 09:10:26 default-k8s-diff-port-588333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c124cf621b7bef383c51a20213d6f99c43490bc9138b2c57b8e874f868d88edf] <==
	2026/01/11 09:09:50 Using namespace: kubernetes-dashboard
	2026/01/11 09:09:50 Using in-cluster config to connect to apiserver
	2026/01/11 09:09:50 Using secret token for csrf signing
	2026/01/11 09:09:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 09:09:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 09:09:50 Successful initial request to the apiserver, version: v1.35.0
	2026/01/11 09:09:50 Generating JWE encryption key
	2026/01/11 09:09:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 09:09:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 09:09:51 Initializing JWE encryption key from synchronized object
	2026/01/11 09:09:51 Creating in-cluster Sidecar client
	2026/01/11 09:09:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:09:51 Serving insecurely on HTTP port: 9090
	2026/01/11 09:10:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:09:50 Starting overwatch
	
	
	==> storage-provisioner [7c6183b6143a56c781eb96c23625b670fed80c64491110815f034bceab591fa0] <==
	I0111 09:09:34.009457       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 09:10:04.020543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c680db18b2931afbe872bdff4c678badd65f59d49da119c16efda3217f1834d6] <==
	I0111 09:10:04.594796       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0111 09:10:04.612388       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 09:10:04.612606       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 09:10:04.616155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:08.072179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:12.332678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:15.931265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:18.985443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:22.008082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:22.013991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:10:22.014305       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 09:10:22.014525       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-588333_0537042a-3e53-4d1e-9ca1-ead1dd5cac54!
	I0111 09:10:22.015572       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ddfb1208-c7f0-4849-a965-1b5d359cfb5d", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-588333_0537042a-3e53-4d1e-9ca1-ead1dd5cac54 became leader
	W0111 09:10:22.022069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:22.034981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:10:22.115291       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-588333_0537042a-3e53-4d1e-9ca1-ead1dd5cac54!
	W0111 09:10:24.039192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:24.044455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:26.049745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:26.058523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:28.061411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:28.077575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:30.098495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:30.114754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333: exit status 2 (419.177977ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-588333 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-588333
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-588333:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f",
	        "Created": "2026-01-11T09:08:13.612670128Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 791778,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:09:22.245236941Z",
	            "FinishedAt": "2026-01-11T09:09:21.433774853Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/hosts",
	        "LogPath": "/var/lib/docker/containers/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f/ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f-json.log",
	        "Name": "/default-k8s-diff-port-588333",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-588333:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-588333",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed12141416565f3089133f16af593e9375563d369f753e828a953981f36a487f",
	                "LowerDir": "/var/lib/docker/overlay2/5ed5c49c670be7eacdb8eab8b674e3763ca92e5df45679f0d330c538754b227a-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ed5c49c670be7eacdb8eab8b674e3763ca92e5df45679f0d330c538754b227a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ed5c49c670be7eacdb8eab8b674e3763ca92e5df45679f0d330c538754b227a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ed5c49c670be7eacdb8eab8b674e3763ca92e5df45679f0d330c538754b227a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-588333",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-588333/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-588333",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-588333",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-588333",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13fa7766e1242e379caf21679910d0f63459c71a65b68c919e160f50c50269c0",
	            "SandboxKey": "/var/run/docker/netns/13fa7766e124",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-588333": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:34:ad:f5:b5:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa19db219143297e6d2133400cad3ab3e7355f9d99472fad6a65d0a14f403a70",
	                    "EndpointID": "9ce16b287b16ca9935825217865e149bf93c6fdf3eac19473218733c680ebd25",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-588333",
	                        "ed1214141656"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
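Most of the inspect JSON above is standard docker container inspect output; the part the harness actually needs for SSH access is the 22/tcp entry under NetworkSettings.Ports. A minimal, self-contained sketch of pulling that host binding out of such output with the Go standard library — illustrative only, not the test harness's own helper:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspect models only the fields we read from `docker container inspect`.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "default-k8s-diff-port-588333").Output()
		if err != nil {
			panic(err)
		}
		// The command returns a JSON array with one element per container.
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		// For the container inspected above this prints 127.0.0.1:33818.
		for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
		}
	}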
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333: exit status 2 (341.657655ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
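The --format={{.Host}} argument is a Go text/template rendered against minikube's status object, which is why the captured stdout above is the single word "Running". A minimal sketch of that rendering; the Status struct below is illustrative, not minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the object minikube feeds to --format templates.
	type Status struct {
		Host string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints "Running", matching the stdout captured above.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running"})
	}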
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-588333 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-588333 logs -n 25: (1.272695552s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-236664                                                                                                                                                                                                                          │ no-preload-236664            │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:07 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:07 UTC │ 11 Jan 26 09:08 UTC │
	│ ssh     │ force-systemd-flag-630015 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p force-systemd-flag-630015                                                                                                                                                                                                                  │ force-systemd-flag-630015    │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ delete  │ -p disable-driver-mounts-781777                                                                                                                                                                                                               │ disable-driver-mounts-781777 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-630626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │                     │
	│ stop    │ -p embed-certs-630626 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-630626 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:08 UTC │
	│ start   │ -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:08 UTC │ 11 Jan 26 09:09 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-588333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-588333 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-588333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:10 UTC │
	│ image   │ embed-certs-630626 image list --format=json                                                                                                                                                                                                   │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ pause   │ -p embed-certs-630626 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	│ delete  │ -p embed-certs-630626                                                                                                                                                                                                                         │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ delete  │ -p embed-certs-630626                                                                                                                                                                                                                         │ embed-certs-630626           │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ start   │ -p newest-cni-193049 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-193049            │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:10 UTC │
	│ addons  │ enable metrics-server -p newest-cni-193049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-193049            │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	│ image   │ default-k8s-diff-port-588333 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ pause   │ -p default-k8s-diff-port-588333 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-588333 │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	│ stop    │ -p newest-cni-193049 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-193049            │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ addons  │ enable dashboard -p newest-cni-193049 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-193049            │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ start   │ -p newest-cni-193049 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-193049            │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:10:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:10:29.084648  798830 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:10:29.084840  798830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:10:29.084866  798830 out.go:374] Setting ErrFile to fd 2...
	I0111 09:10:29.084888  798830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:10:29.085174  798830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:10:29.085591  798830 out.go:368] Setting JSON to false
	I0111 09:10:29.086585  798830 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13979,"bootTime":1768108650,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:10:29.086690  798830 start.go:143] virtualization:  
	I0111 09:10:29.089708  798830 out.go:179] * [newest-cni-193049] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:10:29.096437  798830 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:10:29.098234  798830 notify.go:221] Checking for updates...
	I0111 09:10:29.102639  798830 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:10:29.106970  798830 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:10:29.109819  798830 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:10:29.112664  798830 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:10:29.115561  798830 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:10:29.118985  798830 config.go:182] Loaded profile config "newest-cni-193049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:10:29.119656  798830 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:10:29.159677  798830 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:10:29.159786  798830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:10:29.249511  798830 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:10:29.239777517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:10:29.249620  798830 docker.go:319] overlay module found
	I0111 09:10:29.252701  798830 out.go:179] * Using the docker driver based on existing profile
	I0111 09:10:29.257701  798830 start.go:309] selected driver: docker
	I0111 09:10:29.257723  798830 start.go:928] validating driver "docker" against &{Name:newest-cni-193049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-193049 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:10:29.257819  798830 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:10:29.258540  798830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:10:29.360597  798830 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:10:29.349981213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:10:29.360928  798830 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0111 09:10:29.360952  798830 cni.go:84] Creating CNI manager for ""
	I0111 09:10:29.361003  798830 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:10:29.361202  798830 start.go:353] cluster config:
	{Name:newest-cni-193049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-193049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:10:29.366285  798830 out.go:179] * Starting "newest-cni-193049" primary control-plane node in "newest-cni-193049" cluster
	I0111 09:10:29.369180  798830 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:10:29.372180  798830 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:10:29.375062  798830 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:10:29.375124  798830 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 09:10:29.375136  798830 cache.go:65] Caching tarball of preloaded images
	I0111 09:10:29.375223  798830 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:10:29.375233  798830 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 09:10:29.375348  798830 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/newest-cni-193049/config.json ...
	I0111 09:10:29.375557  798830 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:10:29.401418  798830 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:10:29.401438  798830 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:10:29.401453  798830 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:10:29.401484  798830 start.go:360] acquireMachinesLock for newest-cni-193049: {Name:mkf4b4913de610081a1f70a8057cb410a71fc0bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:10:29.401537  798830 start.go:364] duration metric: took 36.776µs to acquireMachinesLock for "newest-cni-193049"
	I0111 09:10:29.401563  798830 start.go:96] Skipping create...Using existing machine configuration
	I0111 09:10:29.401572  798830 fix.go:54] fixHost starting: 
	I0111 09:10:29.401866  798830 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:10:29.426807  798830 fix.go:112] recreateIfNeeded on newest-cni-193049: state=Stopped err=<nil>
	W0111 09:10:29.426835  798830 fix.go:138] unexpected machine state, will restart: <nil>
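The acquireMachinesLock entry above carries Delay:500ms and Timeout:10m0s, i.e. the lock is retried every half second for up to ten minutes before giving up. A small sketch of that poll-until-deadline pattern, purely illustrative and not minikube's implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// acquire retries try() every delay until it succeeds or timeout elapses.
	func acquire(try func() bool, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if try() {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		err := acquire(func() bool { return true }, 500*time.Millisecond, 10*time.Minute)
		fmt.Println("acquired:", err == nil)
	}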
	
	
	==> CRI-O <==
	Jan 11 09:10:04 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:04.568203615Z" level=info msg="Created container c680db18b2931afbe872bdff4c678badd65f59d49da119c16efda3217f1834d6: kube-system/storage-provisioner/storage-provisioner" id=c3217831-d213-4db5-8a7d-4a4c424cdddb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:04 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:04.571349896Z" level=info msg="Starting container: c680db18b2931afbe872bdff4c678badd65f59d49da119c16efda3217f1834d6" id=8ff78661-e164-44ef-9228-9771f2513563 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:10:04 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:04.574668937Z" level=info msg="Started container" PID=1690 containerID=c680db18b2931afbe872bdff4c678badd65f59d49da119c16efda3217f1834d6 description=kube-system/storage-provisioner/storage-provisioner id=8ff78661-e164-44ef-9228-9771f2513563 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c17bba03f5854d0584f77b5a0f3e71ef2ba2593345b0490bf2b92abfa3869c8
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.248539279Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.248577754Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.270100426Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.270155549Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.325262808Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.32544325Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.325540326Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.356306596Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 11 09:10:14 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:14.356692964Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.200062482Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=391a28c1-62c4-4e86-a8a3-c0fb43d76ed3 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.20525646Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=158cea12-fc95-43b0-89cc-cc14f568ca49 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.207835037Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld/dashboard-metrics-scraper" id=4936f8a7-00fb-4197-a938-1385577f7ea2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.207962866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.2212096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.221898364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.247699934Z" level=info msg="Created container ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld/dashboard-metrics-scraper" id=4936f8a7-00fb-4197-a938-1385577f7ea2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.250781402Z" level=info msg="Starting container: ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d" id=dfc88000-4896-4fa6-862d-03539eb87a6d name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.256786533Z" level=info msg="Started container" PID=1762 containerID=ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld/dashboard-metrics-scraper id=dfc88000-4896-4fa6-862d-03539eb87a6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=10293ed9ed54ca09066b5d5116d3452f56beccfa50b30dfb284247c3caddd9a9
	Jan 11 09:10:15 default-k8s-diff-port-588333 conmon[1760]: conmon ca5b9d5226493a4fa53c <ninfo>: container 1762 exited with status 1
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.509340316Z" level=info msg="Removing container: ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1" id=26680182-940a-413c-8c06-065f61045dd9 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.520200394Z" level=info msg="Error loading conmon cgroup of container ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1: cgroup deleted" id=26680182-940a-413c-8c06-065f61045dd9 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 11 09:10:15 default-k8s-diff-port-588333 crio[661]: time="2026-01-11T09:10:15.523360139Z" level=info msg="Removed container ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld/dashboard-metrics-scraper" id=26680182-940a-413c-8c06-065f61045dd9 name=/runtime.v1.RuntimeService/RemoveContainer
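The "CNI monitoring event CREATE" line shows CRI-O noticing kindnet's freshly written 10-kindnet.conflist and switching its default CNI network accordingly. A minimal sketch of that kind of directory watch, using fsnotify as an assumed stand-in for whatever watcher CRI-O actually uses:

	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
		// Watch the CNI config directory the log above is reacting to.
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-w.Events:
				if ev.Op&fsnotify.Create != 0 {
					log.Printf("CNI config change: CREATE %s", ev.Name)
				}
			case err := <-w.Errors:
				log.Println("watch error:", err)
			}
		}
	}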
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	ca5b9d5226493       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago       Exited              dashboard-metrics-scraper   3                   10293ed9ed54c       dashboard-metrics-scraper-867fb5f87b-l5rld             kubernetes-dashboard
	c680db18b2931       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   2c17bba03f585       storage-provisioner                                    kube-system
	c124cf621b7be       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   3f35c7d2664eb       kubernetes-dashboard-b84665fb8-72rrq                   kubernetes-dashboard
	0df8b9a4c407e       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           58 seconds ago       Running             coredns                     1                   c7cb706b8f588       coredns-7d764666f9-2lh6p                               kube-system
	aa393b14f1ac7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   a255bf2aeab92       busybox                                                default
	79ce94787b5bc       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           58 seconds ago       Running             kube-proxy                  1                   c8b57393e7207       kube-proxy-g4x2l                                       kube-system
	7c6183b6143a5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   2c17bba03f585       storage-provisioner                                    kube-system
	e4ae94f42bebf       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           58 seconds ago       Running             kindnet-cni                 1                   9fe6a3befa242       kindnet-8pg22                                          kube-system
	e7c36bd895a08       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   46418d423e5a9       kube-scheduler-default-k8s-diff-port-588333            kube-system
	076f1fdf555e0       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   35e3a2f2db7b2       kube-controller-manager-default-k8s-diff-port-588333   kube-system
	2ae07275c0ab7       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   ff5e762aa0c3a       etcd-default-k8s-diff-port-588333                      kube-system
	6f627745c3daa       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   da2c84b363e82       kube-apiserver-default-k8s-diff-port-588333            kube-system
	
	
	==> coredns [0df8b9a4c407e9731f6c71eb4095846aa25699d52db9617bae6bddb3f9f569f8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53145 - 1584 "HINFO IN 2755546954928543403.1820082797059822654. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013996939s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
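The repeated "plugin/ready: Plugins not ready" lines come from CoreDNS's ready plugin, which refuses to report healthy until the kubernetes plugin has synced with the API server (which the Failed to watch errors above delayed). A minimal probe of that endpoint; port 8181 is the ready plugin's documented default and is an assumption here, since the log itself does not show it:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		resp, err := http.Get("http://127.0.0.1:8181/ready")
		if err != nil {
			fmt.Println("not reachable:", err)
			return
		}
		defer resp.Body.Close()
		// 200 once every ready-aware plugin (including kubernetes) is up.
		fmt.Println("ready status:", resp.StatusCode)
	}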
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-588333
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-588333
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=default-k8s-diff-port-588333
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_08_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:08:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-588333
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:10:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:10:14 +0000   Sun, 11 Jan 2026 09:08:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:10:14 +0000   Sun, 11 Jan 2026 09:08:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:10:14 +0000   Sun, 11 Jan 2026 09:08:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 11 Jan 2026 09:10:14 +0000   Sun, 11 Jan 2026 09:08:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-588333
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                3726b86b-01d8-43b3-a465-e0aaf1859904
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-2lh6p                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-default-k8s-diff-port-588333                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-8pg22                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-default-k8s-diff-port-588333             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-588333    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-g4x2l                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-default-k8s-diff-port-588333             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-l5rld              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-72rrq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  113s  node-controller  Node default-k8s-diff-port-588333 event: Registered Node default-k8s-diff-port-588333 in Controller
	  Normal  RegisteredNode  56s   node-controller  Node default-k8s-diff-port-588333 event: Registered Node default-k8s-diff-port-588333 in Controller
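The Allocated resources block above is simply the column sums of the pod table: CPU requests 100m + 100m + 100m + 250m + 200m + 100m = 850m, which against the node's 2000m allocatable is 850/2000 = 42.5%, truncated to the 42% shown; memory requests 70Mi + 100Mi + 50Mi = 220Mi, about 2.8% of the 8022296Ki (~7834Mi) allocatable, shown as 2%.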
	
	
	==> dmesg <==
	[ +36.980292] overlayfs: idmapped layers are currently not supported
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	[Jan11 09:06] overlayfs: idmapped layers are currently not supported
	[Jan11 09:07] overlayfs: idmapped layers are currently not supported
	[Jan11 09:08] overlayfs: idmapped layers are currently not supported
	[ +12.491892] overlayfs: idmapped layers are currently not supported
	[Jan11 09:09] overlayfs: idmapped layers are currently not supported
	[Jan11 09:10] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2ae07275c0ab7a01e2063bff151242ed44a810e62093e55ae36786b9db6a2095] <==
	{"level":"info","ts":"2026-01-11T09:09:30.000198Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:09:30.000259Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:09:29.993475Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-11T09:09:29.993564Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T09:09:30.000428Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-11T09:09:30.000466Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T09:09:30.000554Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T09:09:30.254273Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T09:09:30.254335Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:09:30.254371Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-11T09:09:30.254383Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:09:30.254397Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T09:09:30.257022Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-11T09:09:30.257054Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:09:30.257071Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-11T09:09:30.257080Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-11T09:09:30.259843Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-588333 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:09:30.259963Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:09:30.260845Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:09:30.262289Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:09:30.279161Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-11T09:09:30.279948Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:09:30.280607Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T09:09:30.297137Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:09:30.297279Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:10:32 up  3:53,  0 user,  load average: 4.37, 2.59, 2.16
	Linux default-k8s-diff-port-588333 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e4ae94f42bebfc7b29b6ffa9b2d76e2ad73831ebbfa9b4f121acaf89c0718ec9] <==
	I0111 09:09:34.024461       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:09:34.041431       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0111 09:09:34.041571       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:09:34.041584       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:09:34.041600       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:09:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:09:34.241377       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:09:34.241394       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:09:34.241408       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:09:34.241724       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0111 09:10:04.242375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0111 09:10:04.242529       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0111 09:10:04.242638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0111 09:10:04.242672       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0111 09:10:05.842113       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0111 09:10:05.842177       1 metrics.go:72] Registering metrics
	I0111 09:10:05.842287       1 controller.go:711] "Syncing nftables rules"
	I0111 09:10:14.241639       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 09:10:14.241701       1 main.go:301] handling current node
	I0111 09:10:24.246382       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0111 09:10:24.246541       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6f627745c3daad695d0b29049d2cbdb0651dcdbf59d1dfadfe4715bf0735f857] <==
	I0111 09:09:33.262008       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:33.262031       1 policy_source.go:248] refreshing policies
	I0111 09:09:33.280307       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:09:33.280537       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0111 09:09:33.280598       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0111 09:09:33.280650       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:33.280686       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0111 09:09:33.280694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0111 09:09:33.280773       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0111 09:09:33.281253       1 aggregator.go:187] initial CRD sync complete...
	I0111 09:09:33.281276       1 autoregister_controller.go:144] Starting autoregister controller
	I0111 09:09:33.281282       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0111 09:09:33.281288       1 cache.go:39] Caches are synced for autoregister controller
	E0111 09:09:33.319964       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 09:09:33.356539       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 09:09:33.833746       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 09:09:34.301056       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 09:09:34.458778       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 09:09:34.515411       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:09:34.528710       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:09:34.612377       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.40.75"}
	I0111 09:09:34.649052       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.199.50"}
	I0111 09:09:36.819622       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:09:36.922533       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0111 09:09:37.048468       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [076f1fdf555e06139c3c03315dc96b76587a0287090e0f05c5db8be14ea7a439] <==
	I0111 09:09:36.327242       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.327343       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.324763       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.327917       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.327971       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.328182       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.328233       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.328268       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.328336       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.328525       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.328795       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.329258       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.324651       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.324755       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.324771       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.326250       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.327923       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0111 09:09:36.330247       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-588333"
	I0111 09:09:36.330338       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0111 09:09:36.341953       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:09:36.362400       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.425643       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:36.425668       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 09:09:36.425674       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 09:09:36.443306       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [79ce94787b5bc8ba39984fee7fa881de863dc7f27491e0c59f25f1604967629f] <==
	I0111 09:09:34.062642       1 server_linux.go:53] "Using iptables proxy"
	I0111 09:09:34.223804       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:09:34.324215       1 shared_informer.go:377] "Caches are synced"
	I0111 09:09:34.324275       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0111 09:09:34.324385       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 09:09:34.464341       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:09:34.464398       1 server_linux.go:136] "Using iptables Proxier"
	I0111 09:09:34.468330       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 09:09:34.468609       1 server.go:529] "Version info" version="v1.35.0"
	I0111 09:09:34.468634       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:09:34.470021       1 config.go:200] "Starting service config controller"
	I0111 09:09:34.470041       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 09:09:34.471977       1 config.go:106] "Starting endpoint slice config controller"
	I0111 09:09:34.471991       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 09:09:34.472757       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 09:09:34.476578       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 09:09:34.476642       1 config.go:309] "Starting node config controller"
	I0111 09:09:34.476647       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 09:09:34.476654       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 09:09:34.571992       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0111 09:09:34.573181       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 09:09:34.577309       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e7c36bd895a088cee8bf01ae0ac34e5e8eb26713282675fc6d4788401b926477] <==
	I0111 09:09:31.467610       1 serving.go:386] Generated self-signed cert in-memory
	W0111 09:09:33.109727       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 09:09:33.109756       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 09:09:33.109765       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 09:09:33.109772       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 09:09:33.256284       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 09:09:33.256312       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:09:33.264497       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 09:09:33.264613       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 09:09:33.264624       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:09:33.264638       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 09:09:33.366575       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 09:09:52 default-k8s-diff-port-588333 kubelet[790]: E0111 09:09:52.432078     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-72rrq" containerName="kubernetes-dashboard"
	Jan 11 09:09:53 default-k8s-diff-port-588333 kubelet[790]: E0111 09:09:53.755934     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:53 default-k8s-diff-port-588333 kubelet[790]: I0111 09:09:53.755975     790 scope.go:122] "RemoveContainer" containerID="8d8a6738c33c6989ac46cb6eb815271beb8110680ca0dd030748394d1c81a86f"
	Jan 11 09:09:54 default-k8s-diff-port-588333 kubelet[790]: I0111 09:09:54.440615     790 scope.go:122] "RemoveContainer" containerID="8d8a6738c33c6989ac46cb6eb815271beb8110680ca0dd030748394d1c81a86f"
	Jan 11 09:09:54 default-k8s-diff-port-588333 kubelet[790]: E0111 09:09:54.440935     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" containerName="dashboard-metrics-scraper"
	Jan 11 09:09:54 default-k8s-diff-port-588333 kubelet[790]: I0111 09:09:54.440964     790 scope.go:122] "RemoveContainer" containerID="ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1"
	Jan 11 09:09:54 default-k8s-diff-port-588333 kubelet[790]: E0111 09:09:54.441152     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-l5rld_kubernetes-dashboard(e475b492-919c-4a78-97a7-24ab93acf554)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" podUID="e475b492-919c-4a78-97a7-24ab93acf554"
	Jan 11 09:09:54 default-k8s-diff-port-588333 kubelet[790]: I0111 09:09:54.468233     790 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-72rrq" podStartSLOduration=5.395011981 podStartE2EDuration="18.467349148s" podCreationTimestamp="2026-01-11 09:09:36 +0000 UTC" firstStartedPulling="2026-01-11 09:09:37.392093384 +0000 UTC m=+8.392736706" lastFinishedPulling="2026-01-11 09:09:50.464430551 +0000 UTC m=+21.465073873" observedRunningTime="2026-01-11 09:09:51.457022732 +0000 UTC m=+22.457666054" watchObservedRunningTime="2026-01-11 09:09:54.467349148 +0000 UTC m=+25.467992478"
	Jan 11 09:10:03 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:03.756203     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" containerName="dashboard-metrics-scraper"
	Jan 11 09:10:03 default-k8s-diff-port-588333 kubelet[790]: I0111 09:10:03.756966     790 scope.go:122] "RemoveContainer" containerID="ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1"
	Jan 11 09:10:03 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:03.757679     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-l5rld_kubernetes-dashboard(e475b492-919c-4a78-97a7-24ab93acf554)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" podUID="e475b492-919c-4a78-97a7-24ab93acf554"
	Jan 11 09:10:04 default-k8s-diff-port-588333 kubelet[790]: I0111 09:10:04.474650     790 scope.go:122] "RemoveContainer" containerID="7c6183b6143a56c781eb96c23625b670fed80c64491110815f034bceab591fa0"
	Jan 11 09:10:11 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:11.884071     790 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-2lh6p" containerName="coredns"
	Jan 11 09:10:15 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:15.199487     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" containerName="dashboard-metrics-scraper"
	Jan 11 09:10:15 default-k8s-diff-port-588333 kubelet[790]: I0111 09:10:15.199524     790 scope.go:122] "RemoveContainer" containerID="ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1"
	Jan 11 09:10:15 default-k8s-diff-port-588333 kubelet[790]: I0111 09:10:15.505748     790 scope.go:122] "RemoveContainer" containerID="ac6f7077abe519de491142845ddb33133957caa4372930a3d1a4a31e4d1110e1"
	Jan 11 09:10:15 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:15.506231     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" containerName="dashboard-metrics-scraper"
	Jan 11 09:10:15 default-k8s-diff-port-588333 kubelet[790]: I0111 09:10:15.506268     790 scope.go:122] "RemoveContainer" containerID="ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d"
	Jan 11 09:10:15 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:15.506504     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-l5rld_kubernetes-dashboard(e475b492-919c-4a78-97a7-24ab93acf554)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" podUID="e475b492-919c-4a78-97a7-24ab93acf554"
	Jan 11 09:10:23 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:23.756138     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" containerName="dashboard-metrics-scraper"
	Jan 11 09:10:23 default-k8s-diff-port-588333 kubelet[790]: I0111 09:10:23.756190     790 scope.go:122] "RemoveContainer" containerID="ca5b9d5226493a4fa53c956dd10d0882e339022c4a672ed5457db4cefd7bf85d"
	Jan 11 09:10:23 default-k8s-diff-port-588333 kubelet[790]: E0111 09:10:23.756359     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-l5rld_kubernetes-dashboard(e475b492-919c-4a78-97a7-24ab93acf554)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-l5rld" podUID="e475b492-919c-4a78-97a7-24ab93acf554"
	Jan 11 09:10:26 default-k8s-diff-port-588333 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 09:10:26 default-k8s-diff-port-588333 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 09:10:26 default-k8s-diff-port-588333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c124cf621b7bef383c51a20213d6f99c43490bc9138b2c57b8e874f868d88edf] <==
	2026/01/11 09:09:50 Starting overwatch
	2026/01/11 09:09:50 Using namespace: kubernetes-dashboard
	2026/01/11 09:09:50 Using in-cluster config to connect to apiserver
	2026/01/11 09:09:50 Using secret token for csrf signing
	2026/01/11 09:09:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/11 09:09:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/11 09:09:50 Successful initial request to the apiserver, version: v1.35.0
	2026/01/11 09:09:50 Generating JWE encryption key
	2026/01/11 09:09:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/11 09:09:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/11 09:09:51 Initializing JWE encryption key from synchronized object
	2026/01/11 09:09:51 Creating in-cluster Sidecar client
	2026/01/11 09:09:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/11 09:09:51 Serving insecurely on HTTP port: 9090
	2026/01/11 09:10:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7c6183b6143a56c781eb96c23625b670fed80c64491110815f034bceab591fa0] <==
	I0111 09:09:34.009457       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0111 09:10:04.020543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c680db18b2931afbe872bdff4c678badd65f59d49da119c16efda3217f1834d6] <==
	I0111 09:10:04.612388       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0111 09:10:04.612606       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0111 09:10:04.616155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:08.072179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:12.332678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:15.931265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:18.985443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:22.008082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:22.013991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:10:22.014305       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0111 09:10:22.014525       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-588333_0537042a-3e53-4d1e-9ca1-ead1dd5cac54!
	I0111 09:10:22.015572       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ddfb1208-c7f0-4849-a965-1b5d359cfb5d", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-588333_0537042a-3e53-4d1e-9ca1-ead1dd5cac54 became leader
	W0111 09:10:22.022069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:22.034981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0111 09:10:22.115291       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-588333_0537042a-3e53-4d1e-9ca1-ead1dd5cac54!
	W0111 09:10:24.039192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:24.044455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:26.049745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:26.058523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:28.061411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:28.077575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:30.098495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:30.114754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:32.120422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0111 09:10:32.124958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333: exit status 2 (405.25651ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-588333 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (7.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-193049 --alsologtostderr -v=1
E0111 09:10:46.228282  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:10:46.233551  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:10:46.243820  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:10:46.264740  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:10:46.305013  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:10:46.385273  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:10:46.545686  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:10:46.866408  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-193049 --alsologtostderr -v=1: exit status 80 (2.235565281s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-193049 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 09:10:45.917436  801832 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:10:45.917666  801832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:10:45.917681  801832 out.go:374] Setting ErrFile to fd 2...
	I0111 09:10:45.917688  801832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:10:45.917967  801832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:10:45.918395  801832 out.go:368] Setting JSON to false
	I0111 09:10:45.918440  801832 mustload.go:66] Loading cluster: newest-cni-193049
	I0111 09:10:45.918982  801832 config.go:182] Loaded profile config "newest-cni-193049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:10:45.919756  801832 cli_runner.go:164] Run: docker container inspect newest-cni-193049 --format={{.State.Status}}
	I0111 09:10:45.962260  801832 host.go:66] Checking if "newest-cni-193049" exists ...
	I0111 09:10:45.962586  801832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:10:46.076114  801832 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 09:10:46.059097024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:10:46.076847  801832 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-193049 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0111 09:10:46.082395  801832 out.go:179] * Pausing node newest-cni-193049 ... 
	I0111 09:10:46.086500  801832 host.go:66] Checking if "newest-cni-193049" exists ...
	I0111 09:10:46.086869  801832 ssh_runner.go:195] Run: systemctl --version
	I0111 09:10:46.086928  801832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-193049
	I0111 09:10:46.109979  801832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/newest-cni-193049/id_rsa Username:docker}
	I0111 09:10:46.224876  801832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:10:46.252383  801832 pause.go:52] kubelet running: true
	I0111 09:10:46.252452  801832 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:10:46.569613  801832 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:10:46.569702  801832 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:10:46.643703  801832 cri.go:96] found id: "0c0a1d3861ec1b16f1579a43d185feafeb13ca5df71ea047daec3d2640cd8fa9"
	I0111 09:10:46.643724  801832 cri.go:96] found id: "40ece3d708cfa40ae7090a2d4cbf5a39ff4671cdc81d2f244547d06612de1fa0"
	I0111 09:10:46.643729  801832 cri.go:96] found id: "5eddea163824216d5ba9de164f946784ce9b2fc07f12c7275e2cbdcd8c651795"
	I0111 09:10:46.643732  801832 cri.go:96] found id: "ba42ff7aa98ef005557c1a5f9ca85205c342efbd7d41c0d11e093ac4234e2f9f"
	I0111 09:10:46.643735  801832 cri.go:96] found id: "077d5b5899b12e4a9bac7509c5d458e1b6a1cb11d82ff1a896def42983b440da"
	I0111 09:10:46.643739  801832 cri.go:96] found id: "fe1ddcabdd0feee6caa75b3c5ec70c2524136a48321b2673b7aba4f2c7858a22"
	I0111 09:10:46.643743  801832 cri.go:96] found id: ""
	I0111 09:10:46.643794  801832 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:10:46.656110  801832 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:10:46Z" level=error msg="open /run/runc: no such file or directory"
	I0111 09:10:46.841813  801832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:10:46.869764  801832 pause.go:52] kubelet running: false
	I0111 09:10:46.869828  801832 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:10:47.113268  801832 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:10:47.113357  801832 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:10:47.228063  801832 cri.go:96] found id: "0c0a1d3861ec1b16f1579a43d185feafeb13ca5df71ea047daec3d2640cd8fa9"
	I0111 09:10:47.228083  801832 cri.go:96] found id: "40ece3d708cfa40ae7090a2d4cbf5a39ff4671cdc81d2f244547d06612de1fa0"
	I0111 09:10:47.228089  801832 cri.go:96] found id: "5eddea163824216d5ba9de164f946784ce9b2fc07f12c7275e2cbdcd8c651795"
	I0111 09:10:47.228093  801832 cri.go:96] found id: "ba42ff7aa98ef005557c1a5f9ca85205c342efbd7d41c0d11e093ac4234e2f9f"
	I0111 09:10:47.228096  801832 cri.go:96] found id: "077d5b5899b12e4a9bac7509c5d458e1b6a1cb11d82ff1a896def42983b440da"
	I0111 09:10:47.228100  801832 cri.go:96] found id: "fe1ddcabdd0feee6caa75b3c5ec70c2524136a48321b2673b7aba4f2c7858a22"
	I0111 09:10:47.228103  801832 cri.go:96] found id: ""
	I0111 09:10:47.228166  801832 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:10:47.612647  801832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 09:10:47.626767  801832 pause.go:52] kubelet running: false
	I0111 09:10:47.626830  801832 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0111 09:10:47.798477  801832 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0111 09:10:47.798562  801832 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0111 09:10:47.890301  801832 cri.go:96] found id: "0c0a1d3861ec1b16f1579a43d185feafeb13ca5df71ea047daec3d2640cd8fa9"
	I0111 09:10:47.890318  801832 cri.go:96] found id: "40ece3d708cfa40ae7090a2d4cbf5a39ff4671cdc81d2f244547d06612de1fa0"
	I0111 09:10:47.890323  801832 cri.go:96] found id: "5eddea163824216d5ba9de164f946784ce9b2fc07f12c7275e2cbdcd8c651795"
	I0111 09:10:47.890326  801832 cri.go:96] found id: "ba42ff7aa98ef005557c1a5f9ca85205c342efbd7d41c0d11e093ac4234e2f9f"
	I0111 09:10:47.890329  801832 cri.go:96] found id: "077d5b5899b12e4a9bac7509c5d458e1b6a1cb11d82ff1a896def42983b440da"
	I0111 09:10:47.890333  801832 cri.go:96] found id: "fe1ddcabdd0feee6caa75b3c5ec70c2524136a48321b2673b7aba4f2c7858a22"
	I0111 09:10:47.890336  801832 cri.go:96] found id: ""
	I0111 09:10:47.890385  801832 ssh_runner.go:195] Run: sudo runc list -f json
	I0111 09:10:47.949662  801832 out.go:203] 
	W0111 09:10:47.981380  801832 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:10:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T09:10:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W0111 09:10:47.981411  801832 out.go:285] * 
	* 
	W0111 09:10:47.992005  801832 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 09:10:48.012478  801832 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-193049 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-193049
helpers_test.go:244: (dbg) docker inspect newest-cni-193049:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73",
	        "Created": "2026-01-11T09:09:55.930458937Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 799046,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:10:29.472993188Z",
	            "FinishedAt": "2026-01-11T09:10:28.390547643Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/hostname",
	        "HostsPath": "/var/lib/docker/containers/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/hosts",
	        "LogPath": "/var/lib/docker/containers/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73-json.log",
	        "Name": "/newest-cni-193049",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-193049:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-193049",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73",
	                "LowerDir": "/var/lib/docker/overlay2/e93912e4611e8bd9933c9c39d66f74ab93f6e85e31f80e743e12a76395e57e82-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e93912e4611e8bd9933c9c39d66f74ab93f6e85e31f80e743e12a76395e57e82/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e93912e4611e8bd9933c9c39d66f74ab93f6e85e31f80e743e12a76395e57e82/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e93912e4611e8bd9933c9c39d66f74ab93f6e85e31f80e743e12a76395e57e82/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-193049",
	                "Source": "/var/lib/docker/volumes/newest-cni-193049/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-193049",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-193049",
	                "name.minikube.sigs.k8s.io": "newest-cni-193049",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d59d2cee9835de515e527fcccaa599dcdc8f42f7a85fc0718b64eb34a909a8c",
	            "SandboxKey": "/var/run/docker/netns/1d59d2cee983",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-193049": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:81:3c:1e:fc:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "74db70392a94307fb92c8a30f920a21debbaee70569c0d4609fca3634546fe0e",
	                    "EndpointID": "ad18bc7fdceadf7da37b686dbc3ea9bea60d887350cc438be99a281fb69eee19",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-193049",
	                        "40fddecbe5bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-193049 -n newest-cni-193049
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-193049 -n newest-cni-193049: exit status 2 (383.59365ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-193049 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-193049 logs -n 25: (1.472942116s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │     PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ -p multinode-869861 -- get pods -o jsonpath='{.items[*].metadata.name}'                                                                           │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ kubectl │ -p multinode-869861 -- exec busybox-769dd8b7dd-c27ns -- nslookup kubernetes.io                                                                    │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ kubectl │ -p multinode-869861 -- exec busybox-769dd8b7dd-gs5f9 -- nslookup kubernetes.io                                                                    │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ kubectl │ -p multinode-869861 -- exec busybox-769dd8b7dd-c27ns -- nslookup kubernetes.default                                                               │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ kubectl │ -p multinode-869861 -- exec busybox-769dd8b7dd-gs5f9 -- nslookup kubernetes.default                                                               │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ kubectl │ -p multinode-869861 -- exec busybox-769dd8b7dd-c27ns -- nslookup kubernetes.default.svc.cluster.local                                             │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ kubectl │ -p multinode-869861 -- exec busybox-769dd8b7dd-gs5f9 -- nslookup kubernetes.default.svc.cluster.local                                             │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ kubectl │ -p multinode-869861 -- get pods -o jsonpath='{.items[*].metadata.name}'                                                                           │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ kubectl │ -p multinode-869861 -- exec busybox-769dd8b7dd-c27ns -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3                       │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ kubectl │ -p multinode-869861 -- exec busybox-769dd8b7dd-c27ns -- sh -c ping -c 1 192.168.67.1                                                              │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ kubectl │ -p multinode-869861 -- exec busybox-769dd8b7dd-gs5f9 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3                       │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ kubectl │ -p multinode-869861 -- exec busybox-769dd8b7dd-gs5f9 -- sh -c ping -c 1 192.168.67.1                                                              │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ node    │ add -p multinode-869861 -v=5 --alsologtostderr                                                                                                    │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:38 UTC │ 11 Jan 26 08:38 UTC │
	│ cp      │ multinode-869861 cp testdata/cp-test.txt multinode-869861:/home/docker/cp-test.txt                                                                │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:39 UTC │ 11 Jan 26 08:39 UTC │
	│ ssh     │ multinode-869861 ssh -n multinode-869861 sudo cat /home/docker/cp-test.txt                                                                        │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:39 UTC │ 11 Jan 26 08:39 UTC │
	│ cp      │ multinode-869861 cp multinode-869861:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2291904672/001/cp-test_multinode-869861.txt         │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:39 UTC │ 11 Jan 26 08:39 UTC │
	│ ssh     │ multinode-869861 ssh -n multinode-869861 sudo cat /home/docker/cp-test.txt                                                                        │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:39 UTC │ 11 Jan 26 08:39 UTC │
	│ cp      │ multinode-869861 cp multinode-869861:/home/docker/cp-test.txt multinode-869861-m02:/home/docker/cp-test_multinode-869861_multinode-869861-m02.txt │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:39 UTC │ 11 Jan 26 08:39 UTC │
	│ ssh     │ multinode-869861 ssh -n multinode-869861 sudo cat /home/docker/cp-test.txt                                                                        │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:39 UTC │ 11 Jan 26 08:39 UTC │
	│ ssh     │ multinode-869861 ssh -n multinode-869861-m02 sudo cat /home/docker/cp-test_multinode-869861_multinode-869861-m02.txt                              │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:39 UTC │ 11 Jan 26 08:39 UTC │
	│ cp      │ multinode-869861 cp multinode-869861:/home/docker/cp-test.txt multinode-869861-m03:/home/docker/cp-test_multinode-869861_multinode-869861-m03.txt │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:39 UTC │ 11 Jan 26 08:39 UTC │
	│ ssh     │ multinode-869861 ssh -n multinode-869861 sudo cat /home/docker/cp-test.txt                                                                        │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:39 UTC │ 11 Jan 26 08:39 UTC │
	│ ssh     │ multinode-869861 ssh -n multinode-869861-m03 sudo cat /home/docker/cp-test_multinode-869861_multinode-869861-m03.txt                              │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:39 UTC │ 11 Jan 26 08:39 UTC │
	│ cp      │ multinode-869861 cp testdata/cp-test.txt multinode-869861-m02:/home/docker/cp-test.txt                                                            │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:39 UTC │ 11 Jan 26 08:39 UTC │
	│ ssh     │ multinode-869861 ssh -n multinode-869861-m02 sudo cat /home/docker/cp-test.txt                                                                    │ multinode-869861 │ jenkins │ v1.37.0 │ 11 Jan 26 08:39 UTC │ 11 Jan 26 08:39 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:10:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:10:47.455015  802037 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:10:47.455190  802037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:10:47.455201  802037 out.go:374] Setting ErrFile to fd 2...
	I0111 09:10:47.455207  802037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:10:47.455493  802037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:10:47.455919  802037 out.go:368] Setting JSON to false
	I0111 09:10:47.456805  802037 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13997,"bootTime":1768108650,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:10:47.456879  802037 start.go:143] virtualization:  
	I0111 09:10:47.484203  802037 out.go:179] * [test-preload-dl-gcs-cached-560704] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:10:47.515025  802037 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:10:47.515145  802037 notify.go:221] Checking for updates...
	I0111 09:10:47.593645  802037 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:10:47.612392  802037 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:10:47.635061  802037 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:10:47.660225  802037 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:10:47.701435  802037 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:10:47.724496  802037 config.go:182] Loaded profile config "newest-cni-193049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:10:47.724605  802037 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:10:47.747310  802037 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:10:47.747441  802037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:10:47.853645  802037 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:10:47.840655476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:10:47.853751  802037 docker.go:319] overlay module found
	I0111 09:10:47.888843  802037 out.go:179] * Using the docker driver based on user configuration
	I0111 09:10:47.917252  802037 start.go:309] selected driver: docker
	I0111 09:10:47.917278  802037 start.go:928] validating driver "docker" against <nil>
	I0111 09:10:47.917393  802037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:10:47.973391  802037 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:10:47.963948324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:10:47.973544  802037 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 09:10:47.973810  802037 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0111 09:10:47.973968  802037 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 09:10:47.998540  802037 out.go:179] * Using Docker driver with root privileges
	I0111 09:10:48.044482  802037 cni.go:84] Creating CNI manager for ""
	I0111 09:10:48.044554  802037 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:10:48.044565  802037 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 09:10:48.044650  802037 start.go:353] cluster config:
	{Name:test-preload-dl-gcs-cached-560704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-gcs-cached-560704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:10:48.108836  802037 out.go:179] * Starting "test-preload-dl-gcs-cached-560704" primary control-plane node in "test-preload-dl-gcs-cached-560704" cluster
	I0111 09:10:48.153595  802037 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:10:48.197445  802037 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:10:48.220563  802037 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I0111 09:10:48.220611  802037 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0111 09:10:48.220621  802037 cache.go:65] Caching tarball of preloaded images
	I0111 09:10:48.220712  802037 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:10:48.220721  802037 cache.go:68] Finished verifying existence of preloaded tar for v1.34.0-rc.2 on crio
	I0111 09:10:48.220869  802037 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/test-preload-dl-gcs-cached-560704/config.json ...
	I0111 09:10:48.220895  802037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/test-preload-dl-gcs-cached-560704/config.json: {Name:mk5cf5537fff8f677d29e3667afd4ad0c1cb9c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:48.221051  802037 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I0111 09:10:48.221087  802037 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:10:48.221114  802037 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/arm64/kubectl.sha256
	I0111 09:10:48.261987  802037 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:10:48.262005  802037 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 to local cache
	I0111 09:10:48.262085  802037 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local cache directory
	I0111 09:10:48.262102  802037 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local cache directory, skipping pull
	I0111 09:10:48.262106  802037 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in cache, skipping pull
	I0111 09:10:48.262113  802037 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 as a tarball
	I0111 09:10:48.262147  802037 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:10:48.298233  802037 out.go:179] * Download complete!
	
	
	==> CRI-O <==
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.205397457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.209727137Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b8b5acc4-edde-419b-8b08-59726ef59333 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.218477026Z" level=info msg="Ran pod sandbox 1ec30b9a3778c2896a5f5b68d29d0c565409f2b40d509cf5196b549c24255d7d with infra container: kube-system/kindnet-nnd7m/POD" id=b8b5acc4-edde-419b-8b08-59726ef59333 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.220147004Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=ecd4713b-23fd-4b69-889c-041c72d6b739 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.223718685Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=7288853a-c1d9-46d4-96db-bb0b4f62b2be name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.225326821Z" level=info msg="Creating container: kube-system/kindnet-nnd7m/kindnet-cni" id=edfd8a7e-d52c-41ca-a0c6-a5f1e5eb1ee0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.225611198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.22838208Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-nvrgg/POD" id=8939bdb4-0383-4dd8-896a-aef974e76b19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.228555047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.246038996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.258895481Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8939bdb4-0383-4dd8-896a-aef974e76b19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.259377243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.271824829Z" level=info msg="Ran pod sandbox 2bf96d26d55bdc1cdef614575c01a9ab6ede487f2f755f245122069bafb322ba with infra container: kube-system/kube-proxy-nvrgg/POD" id=8939bdb4-0383-4dd8-896a-aef974e76b19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.273406954Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=908fb5fd-302d-4ba2-9ada-bcc997c6f68c name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.278855434Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=aa9e4858-ca0c-4a27-8608-66e2aae77a38 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.285214367Z" level=info msg="Creating container: kube-system/kube-proxy-nvrgg/kube-proxy" id=19f6528f-9200-4392-999f-4bd5df8b507c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.287641779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.334506126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.335696032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.385797812Z" level=info msg="Created container 40ece3d708cfa40ae7090a2d4cbf5a39ff4671cdc81d2f244547d06612de1fa0: kube-system/kindnet-nnd7m/kindnet-cni" id=edfd8a7e-d52c-41ca-a0c6-a5f1e5eb1ee0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.388603698Z" level=info msg="Starting container: 40ece3d708cfa40ae7090a2d4cbf5a39ff4671cdc81d2f244547d06612de1fa0" id=b11c9113-4c04-4a05-9680-352963c6241e name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.390923638Z" level=info msg="Started container" PID=1075 containerID=40ece3d708cfa40ae7090a2d4cbf5a39ff4671cdc81d2f244547d06612de1fa0 description=kube-system/kindnet-nnd7m/kindnet-cni id=b11c9113-4c04-4a05-9680-352963c6241e name=/runtime.v1.RuntimeService/StartContainer sandboxID=1ec30b9a3778c2896a5f5b68d29d0c565409f2b40d509cf5196b549c24255d7d
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.473131536Z" level=info msg="Created container 0c0a1d3861ec1b16f1579a43d185feafeb13ca5df71ea047daec3d2640cd8fa9: kube-system/kube-proxy-nvrgg/kube-proxy" id=19f6528f-9200-4392-999f-4bd5df8b507c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.474646132Z" level=info msg="Starting container: 0c0a1d3861ec1b16f1579a43d185feafeb13ca5df71ea047daec3d2640cd8fa9" id=8aea4438-001b-46d2-9f70-040ef1aea639 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.483673604Z" level=info msg="Started container" PID=1079 containerID=0c0a1d3861ec1b16f1579a43d185feafeb13ca5df71ea047daec3d2640cd8fa9 description=kube-system/kube-proxy-nvrgg/kube-proxy id=8aea4438-001b-46d2-9f70-040ef1aea639 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bf96d26d55bdc1cdef614575c01a9ab6ede487f2f755f245122069bafb322ba
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0c0a1d3861ec1       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   6 seconds ago       Running             kube-proxy                1                   2bf96d26d55bd       kube-proxy-nvrgg                            kube-system
	40ece3d708cfa       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   6 seconds ago       Running             kindnet-cni               1                   1ec30b9a3778c       kindnet-nnd7m                               kube-system
	5eddea1638242       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   11 seconds ago      Running             kube-apiserver            1                   60837a2ce89b0       kube-apiserver-newest-cni-193049            kube-system
	ba42ff7aa98ef       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   11 seconds ago      Running             kube-scheduler            1                   f6676d89cb732       kube-scheduler-newest-cni-193049            kube-system
	077d5b5899b12       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   11 seconds ago      Running             kube-controller-manager   1                   60448f4462a7c       kube-controller-manager-newest-cni-193049   kube-system
	fe1ddcabdd0fe       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   11 seconds ago      Running             etcd                      1                   610e3fab78beb       etcd-newest-cni-193049                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-193049
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-193049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=newest-cni-193049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_10_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:10:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-193049
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:10:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:10:42 +0000   Sun, 11 Jan 2026 09:10:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:10:42 +0000   Sun, 11 Jan 2026 09:10:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:10:42 +0000   Sun, 11 Jan 2026 09:10:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 11 Jan 2026 09:10:42 +0000   Sun, 11 Jan 2026 09:10:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-193049
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                fd89335c-cfbd-4c1f-a796-6c2f717b69b5
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-193049                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-nnd7m                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-193049             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-193049    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-nvrgg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-193049             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node newest-cni-193049 event: Registered Node newest-cni-193049 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-193049 event: Registered Node newest-cni-193049 in Controller
	
	
	==> dmesg <==
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	[Jan11 09:06] overlayfs: idmapped layers are currently not supported
	[Jan11 09:07] overlayfs: idmapped layers are currently not supported
	[Jan11 09:08] overlayfs: idmapped layers are currently not supported
	[ +12.491892] overlayfs: idmapped layers are currently not supported
	[Jan11 09:09] overlayfs: idmapped layers are currently not supported
	[Jan11 09:10] overlayfs: idmapped layers are currently not supported
	[ +26.297928] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fe1ddcabdd0feee6caa75b3c5ec70c2524136a48321b2673b7aba4f2c7858a22] <==
	{"level":"info","ts":"2026-01-11T09:10:37.917970Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:10:37.917998Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:10:37.918052Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:10:37.918183Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:10:37.939143Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-11T09:10:37.957207Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T09:10:37.960765Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T09:10:38.134483Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T09:10:38.134550Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:10:38.134628Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:10:38.134668Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:10:38.134686Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T09:10:38.146280Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:10:38.146346Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:10:38.146369Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-11T09:10:38.146379Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:10:38.169954Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-193049 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:10:38.170120Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:10:38.170439Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:10:38.171337Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:10:38.173197Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-11T09:10:38.230857Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:10:38.231702Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T09:10:38.231924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:10:38.231967Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:10:49 up  3:53,  0 user,  load average: 4.18, 2.64, 2.19
	Linux newest-cni-193049 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [40ece3d708cfa40ae7090a2d4cbf5a39ff4671cdc81d2f244547d06612de1fa0] <==
	I0111 09:10:43.552962       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:10:43.553488       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 09:10:43.559522       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:10:43.559554       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:10:43.559568       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:10:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:10:43.842785       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:10:43.842836       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:10:43.842848       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:10:43.843948       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [5eddea163824216d5ba9de164f946784ce9b2fc07f12c7275e2cbdcd8c651795] <==
	I0111 09:10:42.771349       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0111 09:10:42.771357       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0111 09:10:42.771478       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:42.771486       1 policy_source.go:248] refreshing policies
	I0111 09:10:42.772561       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 09:10:42.772653       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 09:10:42.794347       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:10:42.794624       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0111 09:10:42.796939       1 cache.go:39] Caches are synced for autoregister controller
	I0111 09:10:42.807989       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 09:10:42.815388       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0111 09:10:42.817385       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:10:42.840581       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E0111 09:10:42.910334       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 09:10:42.991111       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 09:10:43.262729       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 09:10:43.838497       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 09:10:43.977085       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 09:10:44.024859       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:10:44.060282       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:10:44.227887       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.8.206"}
	I0111 09:10:44.259659       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.254.185"}
	I0111 09:10:46.021167       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 09:10:46.223605       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:10:46.376808       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [077d5b5899b12e4a9bac7509c5d458e1b6a1cb11d82ff1a896def42983b440da] <==
	I0111 09:10:45.919715       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0111 09:10:45.919864       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920011       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920036       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920077       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920096       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920144       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920179       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920406       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920605       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920703       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920926       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.921920       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.923643       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.928023       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.928184       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.929552       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.929590       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 09:10:45.929601       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 09:10:45.929696       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.929722       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.929739       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.958118       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-193049"
	I0111 09:10:45.958209       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0111 09:10:45.974302       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [0c0a1d3861ec1b16f1579a43d185feafeb13ca5df71ea047daec3d2640cd8fa9] <==
	I0111 09:10:43.768992       1 server_linux.go:53] "Using iptables proxy"
	I0111 09:10:44.064238       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:10:44.266518       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:44.266578       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0111 09:10:44.266686       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 09:10:44.322941       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:10:44.322995       1 server_linux.go:136] "Using iptables Proxier"
	I0111 09:10:44.329811       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 09:10:44.330611       1 server.go:529] "Version info" version="v1.35.0"
	I0111 09:10:44.330628       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:10:44.334353       1 config.go:106] "Starting endpoint slice config controller"
	I0111 09:10:44.334370       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 09:10:44.334708       1 config.go:200] "Starting service config controller"
	I0111 09:10:44.334715       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 09:10:44.335025       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 09:10:44.335032       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 09:10:44.335433       1 config.go:309] "Starting node config controller"
	I0111 09:10:44.335441       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 09:10:44.335451       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 09:10:44.435269       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 09:10:44.435317       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 09:10:44.435330       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ba42ff7aa98ef005557c1a5f9ca85205c342efbd7d41c0d11e093ac4234e2f9f] <==
	I0111 09:10:40.295455       1 serving.go:386] Generated self-signed cert in-memory
	W0111 09:10:42.590513       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 09:10:42.590823       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 09:10:42.590839       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 09:10:42.590846       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 09:10:42.752127       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 09:10:42.752240       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:10:42.774227       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 09:10:42.774278       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:10:42.778602       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 09:10:42.778996       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 09:10:42.878447       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: E0111 09:10:42.903942     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-193049" containerName="kube-apiserver"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: E0111 09:10:42.904292     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-193049" containerName="kube-scheduler"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: E0111 09:10:42.904546     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-193049" containerName="etcd"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.912042     737 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-193049"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.912124     737 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: E0111 09:10:42.912487     737 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-193049\" already exists" pod="kube-system/kube-controller-manager-newest-cni-193049"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.912504     737 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-193049"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.918856     737 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: E0111 09:10:42.919186     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-193049" containerName="kube-controller-manager"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: E0111 09:10:42.962478     737 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-193049\" already exists" pod="kube-system/kube-scheduler-newest-cni-193049"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.962521     737 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-193049"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.969091     737 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.973089     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5dc3259e-2cc0-400d-b23f-8e9c3620cf32-cni-cfg\") pod \"kindnet-nnd7m\" (UID: \"5dc3259e-2cc0-400d-b23f-8e9c3620cf32\") " pod="kube-system/kindnet-nnd7m"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.973133     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7eff21d-1b08-4787-ae22-091ae53fe50c-xtables-lock\") pod \"kube-proxy-nvrgg\" (UID: \"e7eff21d-1b08-4787-ae22-091ae53fe50c\") " pod="kube-system/kube-proxy-nvrgg"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.973153     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dc3259e-2cc0-400d-b23f-8e9c3620cf32-xtables-lock\") pod \"kindnet-nnd7m\" (UID: \"5dc3259e-2cc0-400d-b23f-8e9c3620cf32\") " pod="kube-system/kindnet-nnd7m"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.973183     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7eff21d-1b08-4787-ae22-091ae53fe50c-lib-modules\") pod \"kube-proxy-nvrgg\" (UID: \"e7eff21d-1b08-4787-ae22-091ae53fe50c\") " pod="kube-system/kube-proxy-nvrgg"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.973212     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5dc3259e-2cc0-400d-b23f-8e9c3620cf32-lib-modules\") pod \"kindnet-nnd7m\" (UID: \"5dc3259e-2cc0-400d-b23f-8e9c3620cf32\") " pod="kube-system/kindnet-nnd7m"
	Jan 11 09:10:43 newest-cni-193049 kubelet[737]: E0111 09:10:43.006888     737 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-193049\" already exists" pod="kube-system/etcd-newest-cni-193049"
	Jan 11 09:10:43 newest-cni-193049 kubelet[737]: I0111 09:10:43.006927     737 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-193049"
	Jan 11 09:10:43 newest-cni-193049 kubelet[737]: I0111 09:10:43.014755     737 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 11 09:10:43 newest-cni-193049 kubelet[737]: E0111 09:10:43.075928     737 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-193049\" already exists" pod="kube-system/kube-apiserver-newest-cni-193049"
	Jan 11 09:10:43 newest-cni-193049 kubelet[737]: W0111 09:10:43.216962     737 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/crio-1ec30b9a3778c2896a5f5b68d29d0c565409f2b40d509cf5196b549c24255d7d WatchSource:0}: Error finding container 1ec30b9a3778c2896a5f5b68d29d0c565409f2b40d509cf5196b549c24255d7d: Status 404 returned error can't find the container with id 1ec30b9a3778c2896a5f5b68d29d0c565409f2b40d509cf5196b549c24255d7d
	Jan 11 09:10:46 newest-cni-193049 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 09:10:46 newest-cni-193049 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 09:10:46 newest-cni-193049 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-193049 -n newest-cni-193049
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-193049 -n newest-cni-193049: exit status 2 (440.015273ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-193049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-4qsbm storage-provisioner dashboard-metrics-scraper-867fb5f87b-vcd88 kubernetes-dashboard-b84665fb8-v2l9s
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-193049 describe pod coredns-7d764666f9-4qsbm storage-provisioner dashboard-metrics-scraper-867fb5f87b-vcd88 kubernetes-dashboard-b84665fb8-v2l9s
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-193049 describe pod coredns-7d764666f9-4qsbm storage-provisioner dashboard-metrics-scraper-867fb5f87b-vcd88 kubernetes-dashboard-b84665fb8-v2l9s: exit status 1 (95.988933ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-4qsbm" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-vcd88" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-v2l9s" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-193049 describe pod coredns-7d764666f9-4qsbm storage-provisioner dashboard-metrics-scraper-867fb5f87b-vcd88 kubernetes-dashboard-b84665fb8-v2l9s: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-193049
helpers_test.go:244: (dbg) docker inspect newest-cni-193049:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73",
	        "Created": "2026-01-11T09:09:55.930458937Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 799046,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T09:10:29.472993188Z",
	            "FinishedAt": "2026-01-11T09:10:28.390547643Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/hostname",
	        "HostsPath": "/var/lib/docker/containers/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/hosts",
	        "LogPath": "/var/lib/docker/containers/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73-json.log",
	        "Name": "/newest-cni-193049",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-193049:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-193049",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73",
	                "LowerDir": "/var/lib/docker/overlay2/e93912e4611e8bd9933c9c39d66f74ab93f6e85e31f80e743e12a76395e57e82-init/diff:/var/lib/docker/overlay2/90ff5a0736188557690a6e34a5751300397028793fcf5cb627b897ad13e47395/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e93912e4611e8bd9933c9c39d66f74ab93f6e85e31f80e743e12a76395e57e82/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e93912e4611e8bd9933c9c39d66f74ab93f6e85e31f80e743e12a76395e57e82/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e93912e4611e8bd9933c9c39d66f74ab93f6e85e31f80e743e12a76395e57e82/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-193049",
	                "Source": "/var/lib/docker/volumes/newest-cni-193049/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-193049",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-193049",
	                "name.minikube.sigs.k8s.io": "newest-cni-193049",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d59d2cee9835de515e527fcccaa599dcdc8f42f7a85fc0718b64eb34a909a8c",
	            "SandboxKey": "/var/run/docker/netns/1d59d2cee983",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-193049": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:81:3c:1e:fc:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "74db70392a94307fb92c8a30f920a21debbaee70569c0d4609fca3634546fe0e",
	                    "EndpointID": "ad18bc7fdceadf7da37b686dbc3ea9bea60d887350cc438be99a281fb69eee19",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-193049",
	                        "40fddecbe5bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-193049 -n newest-cni-193049
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-193049 -n newest-cni-193049: exit status 2 (440.561816ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-193049 logs -n 25
E0111 09:10:51.349185  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-193049 logs -n 25: (1.293088787s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ stop    │ -p default-k8s-diff-port-588333 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-588333      │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-588333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-588333      │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ start   │ -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-588333      │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:10 UTC │
	│ image   │ embed-certs-630626 image list --format=json                                                                                                                                                                                                   │ embed-certs-630626                │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ pause   │ -p embed-certs-630626 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-630626                │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │                     │
	│ delete  │ -p embed-certs-630626                                                                                                                                                                                                                         │ embed-certs-630626                │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ delete  │ -p embed-certs-630626                                                                                                                                                                                                                         │ embed-certs-630626                │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:09 UTC │
	│ start   │ -p newest-cni-193049 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-193049                 │ jenkins │ v1.37.0 │ 11 Jan 26 09:09 UTC │ 11 Jan 26 09:10 UTC │
	│ addons  │ enable metrics-server -p newest-cni-193049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-193049                 │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	│ image   │ default-k8s-diff-port-588333 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-588333      │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ pause   │ -p default-k8s-diff-port-588333 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-588333      │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	│ stop    │ -p newest-cni-193049 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-193049                 │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ addons  │ enable dashboard -p newest-cni-193049 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-193049                 │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ start   │ -p newest-cni-193049 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-193049                 │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ delete  │ -p default-k8s-diff-port-588333                                                                                                                                                                                                               │ default-k8s-diff-port-588333      │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ delete  │ -p default-k8s-diff-port-588333                                                                                                                                                                                                               │ default-k8s-diff-port-588333      │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ start   │ -p test-preload-dl-gcs-064330 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-064330        │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-064330                                                                                                                                                                                                                 │ test-preload-dl-gcs-064330        │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ start   │ -p test-preload-dl-github-068348 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-068348     │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	│ image   │ newest-cni-193049 image list --format=json                                                                                                                                                                                                    │ newest-cni-193049                 │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ pause   │ -p newest-cni-193049 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-193049                 │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	│ delete  │ -p test-preload-dl-github-068348                                                                                                                                                                                                              │ test-preload-dl-github-068348     │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-560704 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-560704 │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-560704                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-560704 │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │ 11 Jan 26 09:10 UTC │
	│ start   │ -p auto-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-293572                       │ jenkins │ v1.37.0 │ 11 Jan 26 09:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 09:10:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 09:10:48.653455  802302 out.go:360] Setting OutFile to fd 1 ...
	I0111 09:10:48.653564  802302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:10:48.653569  802302 out.go:374] Setting ErrFile to fd 2...
	I0111 09:10:48.653573  802302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 09:10:48.653831  802302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 09:10:48.654281  802302 out.go:368] Setting JSON to false
	I0111 09:10:48.655134  802302 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13999,"bootTime":1768108650,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 09:10:48.655202  802302 start.go:143] virtualization:  
	I0111 09:10:48.658676  802302 out.go:179] * [auto-293572] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 09:10:48.662652  802302 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 09:10:48.662875  802302 notify.go:221] Checking for updates...
	I0111 09:10:48.672253  802302 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 09:10:48.675468  802302 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 09:10:48.678550  802302 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 09:10:48.681552  802302 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 09:10:48.684542  802302 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 09:10:48.688017  802302 config.go:182] Loaded profile config "newest-cni-193049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 09:10:48.688108  802302 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 09:10:48.710468  802302 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 09:10:48.710586  802302 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:10:48.815089  802302 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:10:48.805104869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:10:48.815218  802302 docker.go:319] overlay module found
	I0111 09:10:48.818535  802302 out.go:179] * Using the docker driver based on user configuration
	I0111 09:10:48.821411  802302 start.go:309] selected driver: docker
	I0111 09:10:48.821435  802302 start.go:928] validating driver "docker" against <nil>
	I0111 09:10:48.821450  802302 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 09:10:48.822233  802302 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 09:10:48.900332  802302 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 09:10:48.889223422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 09:10:48.900492  802302 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 09:10:48.900705  802302 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 09:10:48.904312  802302 out.go:179] * Using Docker driver with root privileges
	I0111 09:10:48.907201  802302 cni.go:84] Creating CNI manager for ""
	I0111 09:10:48.907278  802302 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 09:10:48.907289  802302 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 09:10:48.907373  802302 start.go:353] cluster config:
	{Name:auto-293572 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-293572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 09:10:48.910870  802302 out.go:179] * Starting "auto-293572" primary control-plane node in "auto-293572" cluster
	I0111 09:10:48.913595  802302 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 09:10:48.916566  802302 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 09:10:48.919569  802302 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0111 09:10:48.919616  802302 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0111 09:10:48.919641  802302 cache.go:65] Caching tarball of preloaded images
	I0111 09:10:48.919734  802302 preload.go:251] Found /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0111 09:10:48.919745  802302 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0111 09:10:48.919853  802302 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/config.json ...
	I0111 09:10:48.919869  802302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/config.json: {Name:mke48f3202699897d0caba5bd6578aa672b3bcb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 09:10:48.920041  802302 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 09:10:48.945413  802302 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 09:10:48.945432  802302 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 09:10:48.945446  802302 cache.go:243] Successfully downloaded all kic artifacts
	I0111 09:10:48.945481  802302 start.go:360] acquireMachinesLock for auto-293572: {Name:mk854ada6bf3fadc3860bd06ffe8ab994cd6ddec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 09:10:48.945580  802302 start.go:364] duration metric: took 83.603µs to acquireMachinesLock for "auto-293572"
	I0111 09:10:48.945606  802302 start.go:93] Provisioning new machine with config: &{Name:auto-293572 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-293572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0111 09:10:48.945669  802302 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.205397457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.209727137Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b8b5acc4-edde-419b-8b08-59726ef59333 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.218477026Z" level=info msg="Ran pod sandbox 1ec30b9a3778c2896a5f5b68d29d0c565409f2b40d509cf5196b549c24255d7d with infra container: kube-system/kindnet-nnd7m/POD" id=b8b5acc4-edde-419b-8b08-59726ef59333 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.220147004Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=ecd4713b-23fd-4b69-889c-041c72d6b739 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.223718685Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=7288853a-c1d9-46d4-96db-bb0b4f62b2be name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.225326821Z" level=info msg="Creating container: kube-system/kindnet-nnd7m/kindnet-cni" id=edfd8a7e-d52c-41ca-a0c6-a5f1e5eb1ee0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.225611198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.22838208Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-nvrgg/POD" id=8939bdb4-0383-4dd8-896a-aef974e76b19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.228555047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.246038996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.258895481Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8939bdb4-0383-4dd8-896a-aef974e76b19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.259377243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.271824829Z" level=info msg="Ran pod sandbox 2bf96d26d55bdc1cdef614575c01a9ab6ede487f2f755f245122069bafb322ba with infra container: kube-system/kube-proxy-nvrgg/POD" id=8939bdb4-0383-4dd8-896a-aef974e76b19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.273406954Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=908fb5fd-302d-4ba2-9ada-bcc997c6f68c name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.278855434Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=aa9e4858-ca0c-4a27-8608-66e2aae77a38 name=/runtime.v1.ImageService/ImageStatus
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.285214367Z" level=info msg="Creating container: kube-system/kube-proxy-nvrgg/kube-proxy" id=19f6528f-9200-4392-999f-4bd5df8b507c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.287641779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.334506126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.335696032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.385797812Z" level=info msg="Created container 40ece3d708cfa40ae7090a2d4cbf5a39ff4671cdc81d2f244547d06612de1fa0: kube-system/kindnet-nnd7m/kindnet-cni" id=edfd8a7e-d52c-41ca-a0c6-a5f1e5eb1ee0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.388603698Z" level=info msg="Starting container: 40ece3d708cfa40ae7090a2d4cbf5a39ff4671cdc81d2f244547d06612de1fa0" id=b11c9113-4c04-4a05-9680-352963c6241e name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.390923638Z" level=info msg="Started container" PID=1075 containerID=40ece3d708cfa40ae7090a2d4cbf5a39ff4671cdc81d2f244547d06612de1fa0 description=kube-system/kindnet-nnd7m/kindnet-cni id=b11c9113-4c04-4a05-9680-352963c6241e name=/runtime.v1.RuntimeService/StartContainer sandboxID=1ec30b9a3778c2896a5f5b68d29d0c565409f2b40d509cf5196b549c24255d7d
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.473131536Z" level=info msg="Created container 0c0a1d3861ec1b16f1579a43d185feafeb13ca5df71ea047daec3d2640cd8fa9: kube-system/kube-proxy-nvrgg/kube-proxy" id=19f6528f-9200-4392-999f-4bd5df8b507c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.474646132Z" level=info msg="Starting container: 0c0a1d3861ec1b16f1579a43d185feafeb13ca5df71ea047daec3d2640cd8fa9" id=8aea4438-001b-46d2-9f70-040ef1aea639 name=/runtime.v1.RuntimeService/StartContainer
	Jan 11 09:10:43 newest-cni-193049 crio[616]: time="2026-01-11T09:10:43.483673604Z" level=info msg="Started container" PID=1079 containerID=0c0a1d3861ec1b16f1579a43d185feafeb13ca5df71ea047daec3d2640cd8fa9 description=kube-system/kube-proxy-nvrgg/kube-proxy id=8aea4438-001b-46d2-9f70-040ef1aea639 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bf96d26d55bdc1cdef614575c01a9ab6ede487f2f755f245122069bafb322ba
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0c0a1d3861ec1       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   8 seconds ago       Running             kube-proxy                1                   2bf96d26d55bd       kube-proxy-nvrgg                            kube-system
	40ece3d708cfa       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   8 seconds ago       Running             kindnet-cni               1                   1ec30b9a3778c       kindnet-nnd7m                               kube-system
	5eddea1638242       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   14 seconds ago      Running             kube-apiserver            1                   60837a2ce89b0       kube-apiserver-newest-cni-193049            kube-system
	ba42ff7aa98ef       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   14 seconds ago      Running             kube-scheduler            1                   f6676d89cb732       kube-scheduler-newest-cni-193049            kube-system
	077d5b5899b12       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   14 seconds ago      Running             kube-controller-manager   1                   60448f4462a7c       kube-controller-manager-newest-cni-193049   kube-system
	fe1ddcabdd0fe       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   14 seconds ago      Running             etcd                      1                   610e3fab78beb       etcd-newest-cni-193049                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-193049
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-193049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04
	                    minikube.k8s.io/name=newest-cni-193049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_11T09_10_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 11 Jan 2026 09:10:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-193049
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 11 Jan 2026 09:10:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 11 Jan 2026 09:10:42 +0000   Sun, 11 Jan 2026 09:10:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 11 Jan 2026 09:10:42 +0000   Sun, 11 Jan 2026 09:10:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 11 Jan 2026 09:10:42 +0000   Sun, 11 Jan 2026 09:10:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 11 Jan 2026 09:10:42 +0000   Sun, 11 Jan 2026 09:10:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-193049
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ddae311f11c7b76b67dd5269620bc7
	  System UUID:                fd89335c-cfbd-4c1f-a796-6c2f717b69b5
	  Boot ID:                    c56b18f5-eaa7-4e61-ae5e-77e4c72f404f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-193049                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-nnd7m                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-193049             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-193049    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-nvrgg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-193049             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node newest-cni-193049 event: Registered Node newest-cni-193049 in Controller
	  Normal  RegisteredNode  6s    node-controller  Node newest-cni-193049 event: Registered Node newest-cni-193049 in Controller
	
	
	==> dmesg <==
	[Jan11 08:39] overlayfs: idmapped layers are currently not supported
	[Jan11 08:40] overlayfs: idmapped layers are currently not supported
	[  +3.911531] overlayfs: idmapped layers are currently not supported
	[Jan11 08:41] overlayfs: idmapped layers are currently not supported
	[ +22.212213] overlayfs: idmapped layers are currently not supported
	[Jan11 08:42] overlayfs: idmapped layers are currently not supported
	[ +33.482374] overlayfs: idmapped layers are currently not supported
	[Jan11 08:44] overlayfs: idmapped layers are currently not supported
	[Jan11 08:46] overlayfs: idmapped layers are currently not supported
	[Jan11 08:47] overlayfs: idmapped layers are currently not supported
	[Jan11 08:53] overlayfs: idmapped layers are currently not supported
	[Jan11 08:54] overlayfs: idmapped layers are currently not supported
	[Jan11 08:55] overlayfs: idmapped layers are currently not supported
	[Jan11 08:56] overlayfs: idmapped layers are currently not supported
	[Jan11 09:02] overlayfs: idmapped layers are currently not supported
	[ +34.353574] overlayfs: idmapped layers are currently not supported
	[Jan11 09:03] overlayfs: idmapped layers are currently not supported
	[Jan11 09:04] overlayfs: idmapped layers are currently not supported
	[Jan11 09:06] overlayfs: idmapped layers are currently not supported
	[Jan11 09:07] overlayfs: idmapped layers are currently not supported
	[Jan11 09:08] overlayfs: idmapped layers are currently not supported
	[ +12.491892] overlayfs: idmapped layers are currently not supported
	[Jan11 09:09] overlayfs: idmapped layers are currently not supported
	[Jan11 09:10] overlayfs: idmapped layers are currently not supported
	[ +26.297928] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fe1ddcabdd0feee6caa75b3c5ec70c2524136a48321b2673b7aba4f2c7858a22] <==
	{"level":"info","ts":"2026-01-11T09:10:37.917970Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-11T09:10:37.917998Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:10:37.918052Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:10:37.918183Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-11T09:10:37.939143Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-11T09:10:37.957207Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-11T09:10:37.960765Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-11T09:10:38.134483Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-11T09:10:38.134550Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-11T09:10:38.134628Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-11T09:10:38.134668Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:10:38.134686Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-11T09:10:38.146280Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:10:38.146346Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-11T09:10:38.146369Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-11T09:10:38.146379Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-11T09:10:38.169954Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-193049 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-11T09:10:38.170120Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:10:38.170439Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-11T09:10:38.171337Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:10:38.173197Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-11T09:10:38.230857Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-11T09:10:38.231702Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-11T09:10:38.231924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-11T09:10:38.231967Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:10:52 up  3:53,  0 user,  load average: 4.25, 2.68, 2.20
	Linux newest-cni-193049 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [40ece3d708cfa40ae7090a2d4cbf5a39ff4671cdc81d2f244547d06612de1fa0] <==
	I0111 09:10:43.552962       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0111 09:10:43.553488       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0111 09:10:43.559522       1 main.go:148] setting mtu 1500 for CNI 
	I0111 09:10:43.559554       1 main.go:178] kindnetd IP family: "ipv4"
	I0111 09:10:43.559568       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-11T09:10:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0111 09:10:43.842785       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0111 09:10:43.842836       1 controller.go:381] "Waiting for informer caches to sync"
	I0111 09:10:43.842848       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0111 09:10:43.843948       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [5eddea163824216d5ba9de164f946784ce9b2fc07f12c7275e2cbdcd8c651795] <==
	I0111 09:10:42.771349       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0111 09:10:42.771357       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0111 09:10:42.771478       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:42.771486       1 policy_source.go:248] refreshing policies
	I0111 09:10:42.772561       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0111 09:10:42.772653       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0111 09:10:42.794347       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0111 09:10:42.794624       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0111 09:10:42.796939       1 cache.go:39] Caches are synced for autoregister controller
	I0111 09:10:42.807989       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0111 09:10:42.815388       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0111 09:10:42.817385       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0111 09:10:42.840581       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E0111 09:10:42.910334       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0111 09:10:42.991111       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0111 09:10:43.262729       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0111 09:10:43.838497       1 controller.go:667] quota admission added evaluator for: namespaces
	I0111 09:10:43.977085       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0111 09:10:44.024859       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0111 09:10:44.060282       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0111 09:10:44.227887       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.8.206"}
	I0111 09:10:44.259659       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.254.185"}
	I0111 09:10:46.021167       1 controller.go:667] quota admission added evaluator for: endpoints
	I0111 09:10:46.223605       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0111 09:10:46.376808       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [077d5b5899b12e4a9bac7509c5d458e1b6a1cb11d82ff1a896def42983b440da] <==
	I0111 09:10:45.919715       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0111 09:10:45.919864       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920011       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920036       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920077       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920096       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920144       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920179       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920406       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920605       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920703       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.920926       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.921920       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.923643       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.928023       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.928184       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.929552       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.929590       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0111 09:10:45.929601       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0111 09:10:45.929696       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.929722       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.929739       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:45.958118       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-193049"
	I0111 09:10:45.958209       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0111 09:10:45.974302       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [0c0a1d3861ec1b16f1579a43d185feafeb13ca5df71ea047daec3d2640cd8fa9] <==
	I0111 09:10:43.768992       1 server_linux.go:53] "Using iptables proxy"
	I0111 09:10:44.064238       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:10:44.266518       1 shared_informer.go:377] "Caches are synced"
	I0111 09:10:44.266578       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0111 09:10:44.266686       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0111 09:10:44.322941       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0111 09:10:44.322995       1 server_linux.go:136] "Using iptables Proxier"
	I0111 09:10:44.329811       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0111 09:10:44.330611       1 server.go:529] "Version info" version="v1.35.0"
	I0111 09:10:44.330628       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:10:44.334353       1 config.go:106] "Starting endpoint slice config controller"
	I0111 09:10:44.334370       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0111 09:10:44.334708       1 config.go:200] "Starting service config controller"
	I0111 09:10:44.334715       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0111 09:10:44.335025       1 config.go:403] "Starting serviceCIDR config controller"
	I0111 09:10:44.335032       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0111 09:10:44.335433       1 config.go:309] "Starting node config controller"
	I0111 09:10:44.335441       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0111 09:10:44.335451       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0111 09:10:44.435269       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0111 09:10:44.435317       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0111 09:10:44.435330       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ba42ff7aa98ef005557c1a5f9ca85205c342efbd7d41c0d11e093ac4234e2f9f] <==
	I0111 09:10:40.295455       1 serving.go:386] Generated self-signed cert in-memory
	W0111 09:10:42.590513       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0111 09:10:42.590823       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0111 09:10:42.590839       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0111 09:10:42.590846       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0111 09:10:42.752127       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0111 09:10:42.752240       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0111 09:10:42.774227       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0111 09:10:42.774278       1 shared_informer.go:370] "Waiting for caches to sync"
	I0111 09:10:42.778602       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0111 09:10:42.778996       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0111 09:10:42.878447       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: E0111 09:10:42.903942     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-193049" containerName="kube-apiserver"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: E0111 09:10:42.904292     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-193049" containerName="kube-scheduler"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: E0111 09:10:42.904546     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-193049" containerName="etcd"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.912042     737 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-193049"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.912124     737 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: E0111 09:10:42.912487     737 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-193049\" already exists" pod="kube-system/kube-controller-manager-newest-cni-193049"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.912504     737 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-193049"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.918856     737 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: E0111 09:10:42.919186     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-193049" containerName="kube-controller-manager"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: E0111 09:10:42.962478     737 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-193049\" already exists" pod="kube-system/kube-scheduler-newest-cni-193049"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.962521     737 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-193049"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.969091     737 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.973089     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5dc3259e-2cc0-400d-b23f-8e9c3620cf32-cni-cfg\") pod \"kindnet-nnd7m\" (UID: \"5dc3259e-2cc0-400d-b23f-8e9c3620cf32\") " pod="kube-system/kindnet-nnd7m"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.973133     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7eff21d-1b08-4787-ae22-091ae53fe50c-xtables-lock\") pod \"kube-proxy-nvrgg\" (UID: \"e7eff21d-1b08-4787-ae22-091ae53fe50c\") " pod="kube-system/kube-proxy-nvrgg"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.973153     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dc3259e-2cc0-400d-b23f-8e9c3620cf32-xtables-lock\") pod \"kindnet-nnd7m\" (UID: \"5dc3259e-2cc0-400d-b23f-8e9c3620cf32\") " pod="kube-system/kindnet-nnd7m"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.973183     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7eff21d-1b08-4787-ae22-091ae53fe50c-lib-modules\") pod \"kube-proxy-nvrgg\" (UID: \"e7eff21d-1b08-4787-ae22-091ae53fe50c\") " pod="kube-system/kube-proxy-nvrgg"
	Jan 11 09:10:42 newest-cni-193049 kubelet[737]: I0111 09:10:42.973212     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5dc3259e-2cc0-400d-b23f-8e9c3620cf32-lib-modules\") pod \"kindnet-nnd7m\" (UID: \"5dc3259e-2cc0-400d-b23f-8e9c3620cf32\") " pod="kube-system/kindnet-nnd7m"
	Jan 11 09:10:43 newest-cni-193049 kubelet[737]: E0111 09:10:43.006888     737 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-193049\" already exists" pod="kube-system/etcd-newest-cni-193049"
	Jan 11 09:10:43 newest-cni-193049 kubelet[737]: I0111 09:10:43.006927     737 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-193049"
	Jan 11 09:10:43 newest-cni-193049 kubelet[737]: I0111 09:10:43.014755     737 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 11 09:10:43 newest-cni-193049 kubelet[737]: E0111 09:10:43.075928     737 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-193049\" already exists" pod="kube-system/kube-apiserver-newest-cni-193049"
	Jan 11 09:10:43 newest-cni-193049 kubelet[737]: W0111 09:10:43.216962     737 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/40fddecbe5bf26b3d5c5656a0880f4688df90c6e4ad88e0794c97c773ca94d73/crio-1ec30b9a3778c2896a5f5b68d29d0c565409f2b40d509cf5196b549c24255d7d WatchSource:0}: Error finding container 1ec30b9a3778c2896a5f5b68d29d0c565409f2b40d509cf5196b549c24255d7d: Status 404 returned error can't find the container with id 1ec30b9a3778c2896a5f5b68d29d0c565409f2b40d509cf5196b549c24255d7d
	Jan 11 09:10:46 newest-cni-193049 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 11 09:10:46 newest-cni-193049 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 11 09:10:46 newest-cni-193049 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-193049 -n newest-cni-193049
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-193049 -n newest-cni-193049: exit status 2 (432.972245ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-193049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-4qsbm storage-provisioner dashboard-metrics-scraper-867fb5f87b-vcd88 kubernetes-dashboard-b84665fb8-v2l9s
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-193049 describe pod coredns-7d764666f9-4qsbm storage-provisioner dashboard-metrics-scraper-867fb5f87b-vcd88 kubernetes-dashboard-b84665fb8-v2l9s
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-193049 describe pod coredns-7d764666f9-4qsbm storage-provisioner dashboard-metrics-scraper-867fb5f87b-vcd88 kubernetes-dashboard-b84665fb8-v2l9s: exit status 1 (97.601548ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-4qsbm" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-vcd88" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-v2l9s" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-193049 describe pod coredns-7d764666f9-4qsbm storage-provisioner dashboard-metrics-scraper-867fb5f87b-vcd88 kubernetes-dashboard-b84665fb8-v2l9s: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.18s)
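For reference, the post-mortem pod check above can be repeated by hand against the same context; a minimal sketch using only the commands already shown in this log (the context name newest-cni-193049 and the pod names are taken from the output above):

	# list pods not in the Running phase, across all namespaces
	kubectl --context newest-cni-193049 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running
	# describe one of the pods reported above; NotFound is expected once the pod object is gone
	kubectl --context newest-cni-193049 describe pod dashboard-metrics-scraper-867fb5f87b-vcd88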
E0111 09:16:13.918079  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:39.514096  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:39.519554  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:39.529907  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:39.550222  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:39.590555  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:39.670924  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:39.831385  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:40.152022  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:40.792907  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:42.073129  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:42.825616  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:44.633740  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:47.439536  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:47.444964  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:47.455297  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:47.475594  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:47.515955  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:47.596273  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:47.756699  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:48.077244  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:48.717761  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:49.754022  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:49.998487  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:51.590430  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:52.559006  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:16:57.679665  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (274/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.08
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.35.0/json-events 3.31
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.1
18 TestDownloadOnly/v1.35.0/DeleteAll 0.21
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.65
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.14
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.18
27 TestAddons/Setup 195.11
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.8
48 TestAddons/StoppedEnableDisable 12.37
49 TestCertOptions 30.15
50 TestCertExpiration 224.01
58 TestErrorSpam/setup 26.81
59 TestErrorSpam/start 0.81
60 TestErrorSpam/status 1.14
61 TestErrorSpam/pause 6.29
62 TestErrorSpam/unpause 6.42
63 TestErrorSpam/stop 1.53
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 44.36
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.14
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.91
75 TestFunctional/serial/CacheCmd/cache/add_local 1.31
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 27.69
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.45
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 4.01
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 13.47
91 TestFunctional/parallel/DryRun 0.54
92 TestFunctional/parallel/InternationalLanguage 0.25
93 TestFunctional/parallel/StatusCmd 1.19
97 TestFunctional/parallel/ServiceCmdConnect 6.6
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 20.83
101 TestFunctional/parallel/SSHCmd 0.57
102 TestFunctional/parallel/CpCmd 2.09
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.19
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.77
113 TestFunctional/parallel/License 0.36
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 0.73
116 TestFunctional/parallel/ImageCommands/ImageListShort 1.9
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.98
121 TestFunctional/parallel/ImageCommands/Setup 0.71
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.55
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.05
127 TestFunctional/parallel/ServiceCmd/DeployApp 7.24
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.33
138 TestFunctional/parallel/ServiceCmd/List 0.39
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
141 TestFunctional/parallel/ServiceCmd/Format 0.39
142 TestFunctional/parallel/ServiceCmd/URL 0.38
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
150 TestFunctional/parallel/ProfileCmd/profile_list 0.53
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
152 TestFunctional/parallel/MountCmd/any-port 8.04
153 TestFunctional/parallel/MountCmd/specific-port 2.5
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.58
155 TestFunctional/delete_echo-server_images 0.07
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 153.84
163 TestMultiControlPlane/serial/DeployApp 6.78
164 TestMultiControlPlane/serial/PingHostFromPods 1.5
165 TestMultiControlPlane/serial/AddWorkerNode 32.55
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 19.94
169 TestMultiControlPlane/serial/StopSecondaryNode 12.87
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.92
171 TestMultiControlPlane/serial/RestartSecondaryNode 21.56
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.31
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 111.03
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.48
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 36.09
177 TestMultiControlPlane/serial/RestartCluster 71.5
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.05
179 TestMultiControlPlane/serial/AddSecondaryNode 48.32
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.13
185 TestJSONOutput/start/Command 45.98
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.88
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.26
210 TestKicCustomNetwork/create_custom_network 33.82
211 TestKicCustomNetwork/use_default_bridge_network 32.22
212 TestKicExistingNetwork 28.63
213 TestKicCustomSubnet 26.64
214 TestKicStaticIP 31.89
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 62.89
219 TestMountStart/serial/StartWithMountFirst 8.97
220 TestMountStart/serial/VerifyMountFirst 0.4
221 TestMountStart/serial/StartWithMountSecond 8.73
222 TestMountStart/serial/VerifyMountSecond 0.29
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 8.51
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 75.03
231 TestMultiNode/serial/DeployApp2Nodes 5.37
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 28.83
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.77
236 TestMultiNode/serial/CopyFile 10.69
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 8.03
239 TestMultiNode/serial/RestartKeepsNodes 77.12
240 TestMultiNode/serial/DeleteNode 5.53
241 TestMultiNode/serial/StopMultiNode 24.02
242 TestMultiNode/serial/RestartMultiNode 54.1
243 TestMultiNode/serial/ValidateNameConflict 29.49
250 TestScheduledStopUnix 103.6
253 TestInsufficientStorage 12.51
254 TestRunningBinaryUpgrade 313.38
256 TestKubernetesUpgrade 100.2
257 TestMissingContainerUpgrade 114.12
259 TestPause/serial/Start 55.98
260 TestPause/serial/SecondStartNoReconfiguration 120.61
262 TestStoppedBinaryUpgrade/Setup 0.81
263 TestStoppedBinaryUpgrade/Upgrade 307.93
264 TestStoppedBinaryUpgrade/MinikubeLogs 2.1
272 TestPreload/Start-NoPreload-PullImage 71.86
273 TestPreload/Restart-With-Preload-Check-User-Image 45.5
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
277 TestNoKubernetes/serial/StartWithK8s 29.06
278 TestNoKubernetes/serial/StartWithStopK8s 6.43
279 TestNoKubernetes/serial/Start 7.98
280 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
282 TestNoKubernetes/serial/ProfileList 1.02
283 TestNoKubernetes/serial/Stop 1.31
284 TestNoKubernetes/serial/StartNoArgs 7.39
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
293 TestNetworkPlugins/group/false 3.62
298 TestStartStop/group/old-k8s-version/serial/FirstStart 61.56
299 TestStartStop/group/old-k8s-version/serial/DeployApp 9.42
301 TestStartStop/group/old-k8s-version/serial/Stop 12
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
303 TestStartStop/group/old-k8s-version/serial/SecondStart 47.23
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
309 TestStartStop/group/no-preload/serial/FirstStart 56.1
310 TestStartStop/group/no-preload/serial/DeployApp 9.3
312 TestStartStop/group/no-preload/serial/Stop 12
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
314 TestStartStop/group/no-preload/serial/SecondStart 49.7
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
320 TestStartStop/group/embed-certs/serial/FirstStart 47.84
321 TestStartStop/group/embed-certs/serial/DeployApp 9.41
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.24
325 TestStartStop/group/embed-certs/serial/Stop 12.19
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
327 TestStartStop/group/embed-certs/serial/SecondStart 54.38
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.41
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.01
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.49
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.16
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
338 TestStartStop/group/newest-cni/serial/FirstStart 33.52
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
340 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
341 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
345 TestStartStop/group/newest-cni/serial/Stop 1.68
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.3
347 TestStartStop/group/newest-cni/serial/SecondStart 16.41
348 TestPreload/PreloadSrc/gcs 5.38
349 TestPreload/PreloadSrc/github 5.46
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
354 TestPreload/PreloadSrc/gcs-cached 1.19
355 TestNetworkPlugins/group/auto/Start 50.29
356 TestNetworkPlugins/group/kindnet/Start 51.11
357 TestNetworkPlugins/group/auto/KubeletFlags 0.32
358 TestNetworkPlugins/group/auto/NetCatPod 11.35
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/auto/DNS 0.16
361 TestNetworkPlugins/group/auto/Localhost 0.14
362 TestNetworkPlugins/group/auto/HairPin 0.14
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
364 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
365 TestNetworkPlugins/group/kindnet/DNS 0.21
366 TestNetworkPlugins/group/kindnet/Localhost 0.17
367 TestNetworkPlugins/group/kindnet/HairPin 0.21
368 TestNetworkPlugins/group/calico/Start 74.27
369 TestNetworkPlugins/group/custom-flannel/Start 59.61
370 TestNetworkPlugins/group/calico/ControllerPod 6.01
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
373 TestNetworkPlugins/group/calico/KubeletFlags 0.3
374 TestNetworkPlugins/group/calico/NetCatPod 10.29
375 TestNetworkPlugins/group/custom-flannel/DNS 0.17
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
378 TestNetworkPlugins/group/calico/DNS 0.19
379 TestNetworkPlugins/group/calico/Localhost 0.12
380 TestNetworkPlugins/group/calico/HairPin 0.13
381 TestNetworkPlugins/group/enable-default-cni/Start 67.38
382 TestNetworkPlugins/group/flannel/Start 57.94
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
385 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
387 TestNetworkPlugins/group/flannel/NetCatPod 9.31
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
391 TestNetworkPlugins/group/flannel/DNS 0.15
392 TestNetworkPlugins/group/flannel/Localhost 0.14
393 TestNetworkPlugins/group/flannel/HairPin 0.13
394 TestNetworkPlugins/group/bridge/Start 69.13
395 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
396 TestNetworkPlugins/group/bridge/NetCatPod 11.29
397 TestNetworkPlugins/group/bridge/DNS 0.15
398 TestNetworkPlugins/group/bridge/Localhost 0.13
399 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (7.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-639464 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-639464 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.078354485s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0111 08:13:42.338615  576907 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0111 08:13:42.338688  576907 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
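The check above reports the cached preload tarball path; a minimal sketch for confirming that file by hand, using the path from the log above:

	ls -lh /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4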

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-639464
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-639464: exit status 85 (94.682075ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-639464 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-639464 │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 08:13:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 08:13:35.308215  576912 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:13:35.308607  576912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:13:35.308620  576912 out.go:374] Setting ErrFile to fd 2...
	I0111 08:13:35.308627  576912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:13:35.309408  576912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	W0111 08:13:35.309751  576912 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22402-575040/.minikube/config/config.json: open /home/jenkins/minikube-integration/22402-575040/.minikube/config/config.json: no such file or directory
	I0111 08:13:35.310478  576912 out.go:368] Setting JSON to true
	I0111 08:13:35.311577  576912 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10565,"bootTime":1768108650,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 08:13:35.311776  576912 start.go:143] virtualization:  
	I0111 08:13:35.318168  576912 out.go:99] [download-only-639464] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0111 08:13:35.318372  576912 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball: no such file or directory
	I0111 08:13:35.318415  576912 notify.go:221] Checking for updates...
	I0111 08:13:35.321688  576912 out.go:171] MINIKUBE_LOCATION=22402
	I0111 08:13:35.324959  576912 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:13:35.328132  576912 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:13:35.331327  576912 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 08:13:35.334329  576912 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0111 08:13:35.340399  576912 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0111 08:13:35.340748  576912 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:13:35.362011  576912 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:13:35.362114  576912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:13:35.440389  576912 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-11 08:13:35.431229927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:13:35.440500  576912 docker.go:319] overlay module found
	I0111 08:13:35.443599  576912 out.go:99] Using the docker driver based on user configuration
	I0111 08:13:35.443641  576912 start.go:309] selected driver: docker
	I0111 08:13:35.443651  576912 start.go:928] validating driver "docker" against <nil>
	I0111 08:13:35.443772  576912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:13:35.500281  576912 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-11 08:13:35.49055969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:13:35.500451  576912 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:13:35.500748  576912 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0111 08:13:35.500907  576912 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 08:13:35.504097  576912 out.go:171] Using Docker driver with root privileges
	I0111 08:13:35.507249  576912 cni.go:84] Creating CNI manager for ""
	I0111 08:13:35.507334  576912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0111 08:13:35.507350  576912 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 08:13:35.507435  576912 start.go:353] cluster config:
	{Name:download-only-639464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-639464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:13:35.510661  576912 out.go:99] Starting "download-only-639464" primary control-plane node in "download-only-639464" cluster
	I0111 08:13:35.510686  576912 cache.go:134] Beginning downloading kic base image for docker with crio
	I0111 08:13:35.513662  576912 out.go:99] Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:13:35.513718  576912 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 08:13:35.513819  576912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:13:35.529541  576912 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 to local cache
	I0111 08:13:35.529750  576912 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local cache directory
	I0111 08:13:35.529859  576912 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 to local cache
	I0111 08:13:35.577057  576912 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0111 08:13:35.577083  576912 cache.go:65] Caching tarball of preloaded images
	I0111 08:13:35.577248  576912 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 08:13:35.580629  576912 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0111 08:13:35.580651  576912 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0111 08:13:35.580657  576912 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I0111 08:13:35.675591  576912 preload.go:313] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I0111 08:13:35.675720  576912 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0111 08:13:39.376876  576912 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0111 08:13:39.377309  576912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/download-only-639464/config.json ...
	I0111 08:13:39.377349  576912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/download-only-639464/config.json: {Name:mk02177beb5eaca6a9981a62797c65f7eec8d183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:13:39.377534  576912 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0111 08:13:39.377750  576912 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22402-575040/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-639464 host does not exist
	  To start a cluster, run: "minikube start -p download-only-639464"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-639464
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (3.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-637593 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-637593 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.311629541s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.31s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I0111 08:13:46.095614  576907 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I0111 08:13:46.095652  576907 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-637593
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-637593: exit status 85 (95.524334ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-639464 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-639464 │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │ 11 Jan 26 08:13 UTC │
	│ delete  │ -p download-only-639464                                                                                                                                                   │ download-only-639464 │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │ 11 Jan 26 08:13 UTC │
	│ start   │ -o=json --download-only -p download-only-637593 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-637593 │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 08:13:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 08:13:42.826745  577112 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:13:42.826877  577112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:13:42.826888  577112 out.go:374] Setting ErrFile to fd 2...
	I0111 08:13:42.826893  577112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:13:42.827157  577112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:13:42.827562  577112 out.go:368] Setting JSON to true
	I0111 08:13:42.828345  577112 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10573,"bootTime":1768108650,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 08:13:42.828416  577112 start.go:143] virtualization:  
	I0111 08:13:42.831731  577112 out.go:99] [download-only-637593] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:13:42.831922  577112 notify.go:221] Checking for updates...
	I0111 08:13:42.834934  577112 out.go:171] MINIKUBE_LOCATION=22402
	I0111 08:13:42.837885  577112 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:13:42.840840  577112 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:13:42.843833  577112 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 08:13:42.846670  577112 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0111 08:13:42.852586  577112 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0111 08:13:42.852875  577112 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:13:42.879781  577112 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:13:42.879886  577112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:13:42.940306  577112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2026-01-11 08:13:42.930898815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:13:42.940424  577112 docker.go:319] overlay module found
	I0111 08:13:42.943486  577112 out.go:99] Using the docker driver based on user configuration
	I0111 08:13:42.943527  577112 start.go:309] selected driver: docker
	I0111 08:13:42.943535  577112 start.go:928] validating driver "docker" against <nil>
	I0111 08:13:42.943644  577112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:13:42.999028  577112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2026-01-11 08:13:42.989699349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:13:42.999202  577112 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:13:42.999490  577112 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0111 08:13:42.999638  577112 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 08:13:43.003892  577112 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-637593 host does not exist
	  To start a cluster, run: "minikube start -p download-only-637593"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-637593
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I0111 08:13:47.254942  576907 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-442784 --alsologtostderr --binary-mirror http://127.0.0.1:34909 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-442784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-442784
--- PASS: TestBinaryMirror (0.65s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.14s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-328805
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-328805: exit status 85 (142.850598ms)

                                                
                                                
-- stdout --
	* Profile "addons-328805" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-328805"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.14s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-328805
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-328805: exit status 85 (174.891141ms)

                                                
                                                
-- stdout --
	* Profile "addons-328805" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-328805"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

                                                
                                    
TestAddons/Setup (195.11s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-328805 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-328805 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m15.109617483s)
--- PASS: TestAddons/Setup (195.11s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-328805 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-328805 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.8s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-328805 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-328805 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [43d6c10f-542c-4028-be58-ea31a363fd10] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [43d6c10f-542c-4028-be58-ea31a363fd10] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004130005s
addons_test.go:696: (dbg) Run:  kubectl --context addons-328805 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-328805 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-328805 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-328805 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.80s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-328805
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-328805: (12.095087389s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-328805
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-328805
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-328805
--- PASS: TestAddons/StoppedEnableDisable (12.37s)

                                                
                                    
TestCertOptions (30.15s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-459267 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0111 09:02:04.134292  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:02:08.542658  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-459267 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (27.348046918s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-459267 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-459267 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-459267 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-459267" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-459267
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-459267: (2.079687827s)
--- PASS: TestCertOptions (30.15s)

                                                
                                    
TestCertExpiration (224.01s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-448134 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-448134 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.061375993s)
E0111 08:57:04.134813  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:57:08.541303  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-448134 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-448134 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.444119275s)
helpers_test.go:176: Cleaning up "cert-expiration-448134" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-448134
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-448134: (2.502096924s)
--- PASS: TestCertExpiration (224.01s)

                                                
                                    
TestErrorSpam/setup (26.81s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-222067 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-222067 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-222067 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-222067 --driver=docker  --container-runtime=crio: (26.808095404s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (26.81s)

                                                
                                    
TestErrorSpam/start (0.81s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

                                                
                                    
TestErrorSpam/status (1.14s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 status
--- PASS: TestErrorSpam/status (1.14s)

                                                
                                    
TestErrorSpam/pause (6.29s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 pause: exit status 80 (2.317131514s)

                                                
                                                
-- stdout --
	* Pausing node nospam-222067 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:19:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 pause: exit status 80 (2.382268214s)

                                                
                                                
-- stdout --
	* Pausing node nospam-222067 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:19:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 pause: exit status 80 (1.590610679s)

                                                
                                                
-- stdout --
	* Pausing node nospam-222067 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:19:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.29s)

                                                
                                    
TestErrorSpam/unpause (6.42s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 unpause: exit status 80 (1.978723173s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-222067 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:19:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 unpause: exit status 80 (2.25313863s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-222067 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:19:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 unpause: exit status 80 (2.189531712s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-222067 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-11T08:20:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.42s)

                                                
                                    
TestErrorSpam/stop (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 stop: (1.335315042s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-222067 --log_dir /tmp/nospam-222067 stop
--- PASS: TestErrorSpam/stop (1.53s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22402-575040/.minikube/files/etc/test/nested/copy/576907/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (44.36s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-952579 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-952579 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (44.357604167s)
--- PASS: TestFunctional/serial/StartWithProxy (44.36s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.14s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0111 08:20:51.555984  576907 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-952579 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-952579 --alsologtostderr -v=8: (29.142416976s)
functional_test.go:678: soft start took 29.142904834s for "functional-952579" cluster.
I0111 08:21:20.698672  576907 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (29.14s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-952579 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-952579 cache add registry.k8s.io/pause:3.1: (1.306478301s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-952579 cache add registry.k8s.io/pause:3.3: (1.323439202s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-952579 cache add registry.k8s.io/pause:latest: (1.278455484s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.91s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-952579 /tmp/TestFunctionalserialCacheCmdcacheadd_local1682127949/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 cache add minikube-local-cache-test:functional-952579
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 cache delete minikube-local-cache-test:functional-952579
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-952579
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-952579 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.490055ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 kubectl -- --context functional-952579 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-952579 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (27.69s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-952579 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-952579 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (27.692263233s)
functional_test.go:776: restart took 27.69235226s for "functional-952579" cluster.
I0111 08:21:56.499633  576907 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (27.69s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-952579 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-952579 logs: (1.449189393s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 logs --file /tmp/TestFunctionalserialLogsFileCmd3099689516/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-952579 logs --file /tmp/TestFunctionalserialLogsFileCmd3099689516/001/logs.txt: (1.479848009s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctional/serial/InvalidService (4.01s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-952579 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-952579
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-952579: exit status 115 (383.906006ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30619 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-952579 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-952579 config get cpus: exit status 14 (66.878449ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-952579 config get cpus: exit status 14 (66.380178ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-952579 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-952579 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 602794: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.47s)

                                                
                                    
TestFunctional/parallel/DryRun (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-952579 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-952579 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (207.1103ms)

                                                
                                                
-- stdout --
	* [functional-952579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:22:38.333519  602152 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:22:38.333664  602152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:22:38.333687  602152 out.go:374] Setting ErrFile to fd 2...
	I0111 08:22:38.333705  602152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:22:38.333986  602152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:22:38.334434  602152 out.go:368] Setting JSON to false
	I0111 08:22:38.335344  602152 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11108,"bootTime":1768108650,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 08:22:38.335408  602152 start.go:143] virtualization:  
	I0111 08:22:38.339298  602152 out.go:179] * [functional-952579] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:22:38.342279  602152 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:22:38.342353  602152 notify.go:221] Checking for updates...
	I0111 08:22:38.348859  602152 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:22:38.351798  602152 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:22:38.354725  602152 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 08:22:38.357538  602152 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:22:38.360410  602152 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:22:38.363742  602152 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:22:38.364310  602152 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:22:38.394337  602152 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:22:38.394461  602152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:22:38.470164  602152 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 08:22:38.46064066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:22:38.470264  602152 docker.go:319] overlay module found
	I0111 08:22:38.473439  602152 out.go:179] * Using the docker driver based on existing profile
	I0111 08:22:38.476265  602152 start.go:309] selected driver: docker
	I0111 08:22:38.476285  602152 start.go:928] validating driver "docker" against &{Name:functional-952579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-952579 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:22:38.476383  602152 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:22:38.479873  602152 out.go:203] 
	W0111 08:22:38.482771  602152 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0111 08:22:38.485665  602152 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-952579 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.54s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-952579 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-952579 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (245.868594ms)

                                                
                                                
-- stdout --
	* [functional-952579] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:22:38.902855  602336 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:22:38.903117  602336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:22:38.903132  602336 out.go:374] Setting ErrFile to fd 2...
	I0111 08:22:38.903137  602336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:22:38.903693  602336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:22:38.904085  602336 out.go:368] Setting JSON to false
	I0111 08:22:38.905020  602336 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11109,"bootTime":1768108650,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 08:22:38.905091  602336 start.go:143] virtualization:  
	I0111 08:22:38.909084  602336 out.go:179] * [functional-952579] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0111 08:22:38.913809  602336 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:22:38.913894  602336 notify.go:221] Checking for updates...
	I0111 08:22:38.921015  602336 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:22:38.924298  602336 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:22:38.927445  602336 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 08:22:38.930644  602336 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:22:38.933722  602336 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:22:38.937302  602336 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:22:38.937868  602336 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:22:38.969359  602336 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:22:38.969475  602336 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:22:39.050518  602336 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 08:22:39.040163546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:22:39.050662  602336 docker.go:319] overlay module found
	I0111 08:22:39.053882  602336 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0111 08:22:39.056826  602336 start.go:309] selected driver: docker
	I0111 08:22:39.056846  602336 start.go:928] validating driver "docker" against &{Name:functional-952579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-952579 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:22:39.056940  602336 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:22:39.060601  602336 out.go:203] 
	W0111 08:22:39.063528  602336 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0111 08:22:39.066421  602336 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (6.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-952579 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-952579 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-q2vqz" [14b0737e-f25a-42f8-b647-09119404e35e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-q2vqz" [14b0737e-f25a-42f8-b647-09119404e35e] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.004238157s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:30862
functional_test.go:1685: http://192.168.49.2:30862: success! body:
Request served by hello-node-connect-5d95464fd4-q2vqz

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30862
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.60s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (20.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [e3f98154-51a9-4a35-9757-6080dc8d5f17] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003183566s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-952579 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-952579 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-952579 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-952579 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [37324ba0-9453-468e-b78d-b92c08a5a18a] Pending
E0111 08:22:24.620035  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "sp-pod" [37324ba0-9453-468e-b78d-b92c08a5a18a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.017338095s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-952579 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-952579 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-952579 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [611e3d61-d8df-49c3-b92b-4a954b5cf67b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [611e3d61-d8df-49c3-b92b-4a954b5cf67b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003160922s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-952579 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.83s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh -n functional-952579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 cp functional-952579:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd723711767/001/cp-test.txt
E0111 08:22:04.289681  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh -n functional-952579 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh -n functional-952579 "sudo cat /tmp/does/not/exist/cp-test.txt"
E0111 08:22:05.416711  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/CpCmd (2.09s)

                                                
                                    
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/576907/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "sudo cat /etc/test/nested/copy/576907/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
TestFunctional/parallel/CertSync (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/576907.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "sudo cat /etc/ssl/certs/576907.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/576907.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "sudo cat /usr/share/ca-certificates/576907.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "sudo cat /etc/ssl/certs/51391683.0"
E0111 08:22:06.698240  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/5769072.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "sudo cat /etc/ssl/certs/5769072.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/5769072.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "sudo cat /usr/share/ca-certificates/5769072.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-952579 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "sudo systemctl is-active docker"
E0111 08:22:04.454036  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-952579 ssh "sudo systemctl is-active docker": exit status 1 (413.40281ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "sudo systemctl is-active containerd"
E0111 08:22:04.775136  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-952579 ssh "sudo systemctl is-active containerd": exit status 1 (354.365772ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

                                                
                                    
TestFunctional/parallel/License (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
E0111 08:22:04.132476  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:04.137829  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:04.148111  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:04.168381  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:04.208740  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/License (0.36s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 version -o=json --components
E0111 08:22:45.102996  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/Version/components (0.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-arm64 -p functional-952579 image ls --format short --alsologtostderr: (1.899042788s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-952579 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-952579
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-952579 image ls --format short --alsologtostderr:
I0111 08:22:46.422367  603842 out.go:360] Setting OutFile to fd 1 ...
I0111 08:22:46.422575  603842 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:22:46.422602  603842 out.go:374] Setting ErrFile to fd 2...
I0111 08:22:46.422621  603842 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:22:46.422902  603842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
I0111 08:22:46.423585  603842 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 08:22:46.423773  603842 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 08:22:46.424353  603842 cli_runner.go:164] Run: docker container inspect functional-952579 --format={{.State.Status}}
I0111 08:22:46.444114  603842 ssh_runner.go:195] Run: systemctl --version
I0111 08:22:46.444172  603842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-952579
I0111 08:22:46.462939  603842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/functional-952579/id_rsa Username:docker}
I0111 08:22:46.569300  603842 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
I0111 08:22:48.249936  603842 ssh_runner.go:235] Completed: sudo crictl --timeout=10s images --output json: (1.680606438s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-952579 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 611c6647fcbbc │ 62.6MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ c3fcf259c473a │ 85MB   │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 88898f1d1a62a │ 72.2MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                             │ 3.3                                   │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                             │ latest                                │ 8cb2091f603e7 │ 246kB  │
│ localhost/minikube-local-cache-test               │ functional-952579                     │ c2b6498c04269 │ 3.33kB │
│ localhost/my-image                                │ functional-952579                     │ 3a64dd9668b0d │ 1.64MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 271e49a0ebc56 │ 60.9MB │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ b1a8c6f707935 │ 111MB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ c96ee3c174987 │ 108MB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/busybox                       │ latest                                │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ e08f4d9d2e6ed │ 74.5MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ de369f46c2ff5 │ 74.1MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ ddc8422d4d35a │ 49.8MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ d7b100cd9a77b │ 520kB  │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-952579                     │ ce2d2cda2d858 │ 4.79MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ ce2d2cda2d858 │ 4.79MB │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-952579 image ls --format table --alsologtostderr:
I0111 08:22:52.594214  604301 out.go:360] Setting OutFile to fd 1 ...
I0111 08:22:52.596463  604301 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:22:52.596509  604301 out.go:374] Setting ErrFile to fd 2...
I0111 08:22:52.596529  604301 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:22:52.596836  604301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
I0111 08:22:52.597521  604301 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 08:22:52.600015  604301 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 08:22:52.600744  604301 cli_runner.go:164] Run: docker container inspect functional-952579 --format={{.State.Status}}
I0111 08:22:52.622650  604301 ssh_runner.go:195] Run: systemctl --version
I0111 08:22:52.622702  604301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-952579
I0111 08:22:52.646530  604301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/functional-952579/id_rsa Username:docker}
I0111 08:22:52.754348  604301 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-952579 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"42263767"},{"id":"b823964b5e98da51bcba0a15c418c4a56c8a66e89bb45f004d8103c836f41631","repoDigests":["docker.io/library/e61cadac6559236704760fe04eaf13dea01312d903d8d1f105479bf8706fc939-tmp@sha256:0f1b51a2942b4b45c1939a6e3d1a18d8b125c53024d5ee145d3ac0b96dd3eeac"],"repoTags":[],"size":"1638178"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ce2d2cda2d858fdae
a84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4788229"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb629
4acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf","docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"247562353"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7b
a97220b99c39ee2e162a7127225890","registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"60850387"},{"id":"c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"85015535"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503","registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"72170321"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0
bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"49822549"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e","gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"c2b6498c042699b0f06f7177db59df6b255d9f6389ac21cc51672116905a10ed","repoDigests":["localhost/minikube-local-cache-test@sha256:7b1ddbcaf15a30a775cb666750d3209c0193491251d41d587dbf17e6fea2ed26"],"repoTags":["localhost/minikube-local-cache-test:functional-952579"],"size":"3328"},{"id":"611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c4
0af5371","repoDigests":["public.ecr.aws/nginx/nginx@sha256:be49159753b31dc6d536fca5b044033e1e3e836667959ac238471b2ce50b31b0","public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"62642350"},{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"74106775"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b1
4890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3","docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"108362109"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba37058
8274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"3a64dd9668b0d8d74ed6f6712d7fe4677031913a018ac39ba7bf495d502a7f83","repoDigests":["localhost/my-image@sha256:4c9a7cf67f03117dcbb4046edf2671d8e8ad03611472a931d31f0a0da05f55b5"],"repoTags":["localhost/my-image:functional-952579"],"size":"1640789"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-952579 image ls --format json --alsologtostderr:
I0111 08:22:52.573810  604297 out.go:360] Setting OutFile to fd 1 ...
I0111 08:22:52.573933  604297 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:22:52.573944  604297 out.go:374] Setting ErrFile to fd 2...
I0111 08:22:52.573949  604297 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:22:52.574442  604297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
I0111 08:22:52.579126  604297 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 08:22:52.579278  604297 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 08:22:52.579832  604297 cli_runner.go:164] Run: docker container inspect functional-952579 --format={{.State.Status}}
I0111 08:22:52.606726  604297 ssh_runner.go:195] Run: systemctl --version
I0111 08:22:52.606787  604297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-952579
I0111 08:22:52.625040  604297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/functional-952579/id_rsa Username:docker}
I0111 08:22:52.736907  604297 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-952579 image ls --format yaml --alsologtostderr:
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "72170321"
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "74106775"
- id: c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "108362109"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4788229"
- id: 611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:be49159753b31dc6d536fca5b044033e1e3e836667959ac238471b2ce50b31b0
- public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "62642350"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "49822549"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: c2b6498c042699b0f06f7177db59df6b255d9f6389ac21cc51672116905a10ed
repoDigests:
- localhost/minikube-local-cache-test@sha256:7b1ddbcaf15a30a775cb666750d3209c0193491251d41d587dbf17e6fea2ed26
repoTags:
- localhost/minikube-local-cache-test:functional-952579
size: "3328"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
- registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "60850387"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "42263767"
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "85015535"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-952579 image ls --format yaml --alsologtostderr:
I0111 08:22:48.317405  603889 out.go:360] Setting OutFile to fd 1 ...
I0111 08:22:48.317608  603889 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:22:48.317653  603889 out.go:374] Setting ErrFile to fd 2...
I0111 08:22:48.317687  603889 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:22:48.318098  603889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
I0111 08:22:48.319827  603889 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 08:22:48.319991  603889 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 08:22:48.320620  603889 cli_runner.go:164] Run: docker container inspect functional-952579 --format={{.State.Status}}
I0111 08:22:48.349950  603889 ssh_runner.go:195] Run: systemctl --version
I0111 08:22:48.350002  603889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-952579
I0111 08:22:48.368452  603889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/functional-952579/id_rsa Username:docker}
I0111 08:22:48.481064  603889 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-952579 ssh pgrep buildkitd: exit status 1 (346.144885ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image build -t localhost/my-image:functional-952579 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-952579 image build -t localhost/my-image:functional-952579 testdata/build --alsologtostderr: (3.378416153s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-952579 image build -t localhost/my-image:functional-952579 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b823964b5e9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-952579
--> 3a64dd9668b
Successfully tagged localhost/my-image:functional-952579
3a64dd9668b0d8d74ed6f6712d7fe4677031913a018ac39ba7bf495d502a7f83
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-952579 image build -t localhost/my-image:functional-952579 testdata/build --alsologtostderr:
I0111 08:22:48.920065  604001 out.go:360] Setting OutFile to fd 1 ...
I0111 08:22:48.920968  604001 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:22:48.920990  604001 out.go:374] Setting ErrFile to fd 2...
I0111 08:22:48.920996  604001 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 08:22:48.921312  604001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
I0111 08:22:48.922052  604001 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 08:22:48.922802  604001 config.go:182] Loaded profile config "functional-952579": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0111 08:22:48.923391  604001 cli_runner.go:164] Run: docker container inspect functional-952579 --format={{.State.Status}}
I0111 08:22:48.943511  604001 ssh_runner.go:195] Run: systemctl --version
I0111 08:22:48.943563  604001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-952579
I0111 08:22:48.964085  604001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/functional-952579/id_rsa Username:docker}
I0111 08:22:49.068816  604001 build_images.go:162] Building image from path: /tmp/build.1779032638.tar
I0111 08:22:49.068881  604001 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0111 08:22:49.076796  604001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1779032638.tar
I0111 08:22:49.080526  604001 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1779032638.tar: stat -c "%s %y" /var/lib/minikube/build/build.1779032638.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1779032638.tar': No such file or directory
I0111 08:22:49.080558  604001 ssh_runner.go:362] scp /tmp/build.1779032638.tar --> /var/lib/minikube/build/build.1779032638.tar (3072 bytes)
I0111 08:22:49.108502  604001 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1779032638
I0111 08:22:49.116712  604001 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1779032638 -xf /var/lib/minikube/build/build.1779032638.tar
I0111 08:22:49.125170  604001 crio.go:315] Building image: /var/lib/minikube/build/build.1779032638
I0111 08:22:49.125281  604001 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-952579 /var/lib/minikube/build/build.1779032638 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0111 08:22:52.226777  604001 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-952579 /var/lib/minikube/build/build.1779032638 --cgroup-manager=cgroupfs: (3.10146538s)
I0111 08:22:52.226845  604001 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1779032638
I0111 08:22:52.234669  604001 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1779032638.tar
I0111 08:22:52.242582  604001 build_images.go:218] Built localhost/my-image:functional-952579 from /tmp/build.1779032638.tar
I0111 08:22:52.242612  604001 build_images.go:134] succeeded building to: functional-952579
I0111 08:22:52.242617  604001 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image ls
2026/01/11 08:22:52 [DEBUG] GET http://127.0.0.1:35453/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)
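Note on the build above: the three STEP lines imply that the testdata/build context driven by "minikube image build" contains a small Dockerfile/Containerfile along the lines of the sketch below. The file itself is not captured in this report, so this is only a reconstruction from the logged steps:

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /

With the crio runtime the build is delegated to podman inside the node (the "sudo podman build -t localhost/my-image:functional-952579 ..." line in the stderr log), which is why localhost/my-image:functional-952579 also appears in the image ls output earlier in this report.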
TestFunctional/parallel/ImageCommands/Setup (0.71s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-952579 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579 --alsologtostderr: (1.270694466s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)
TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-952579 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-952579 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-z5flc" [13c94670-bb7b-42e2-b0fd-7975eb60bd13] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-z5flc" [13c94670-bb7b-42e2-b0fd-7975eb60bd13] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.008054949s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579 --alsologtostderr
E0111 08:22:09.258613  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-952579 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-952579 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-952579 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-952579 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 600288: os: process already finished
helpers_test.go:526: unable to kill pid 600169: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-952579 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-952579 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [ebc6ddb1-91b2-49a3-a2cc-f7be125d6ee6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0111 08:22:14.378902  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "nginx-svc" [ebc6ddb1-91b2-49a3-a2cc-f7be125d6ee6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.002794402s
I0111 08:22:22.139652  576907 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)
TestFunctional/parallel/ServiceCmd/List (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.39s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 service list -o json
functional_test.go:1509: Took "358.14986ms" to run "out/minikube-linux-arm64 -p functional-952579 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:31493
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
TestFunctional/parallel/ServiceCmd/Format (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)
TestFunctional/parallel/ServiceCmd/URL (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:31493
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-952579 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.108.223 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-952579 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)
TestFunctional/parallel/ProfileCmd/profile_list (0.53s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "474.039368ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "54.905245ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "394.67885ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "58.849065ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
TestFunctional/parallel/MountCmd/any-port (8.04s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-952579 /tmp/TestFunctionalparallelMountCmdany-port3183278209/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1768119751761115364" to /tmp/TestFunctionalparallelMountCmdany-port3183278209/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1768119751761115364" to /tmp/TestFunctionalparallelMountCmdany-port3183278209/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1768119751761115364" to /tmp/TestFunctionalparallelMountCmdany-port3183278209/001/test-1768119751761115364
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-952579 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (360.96066ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0111 08:22:32.122368  576907 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 11 08:22 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 11 08:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 11 08:22 test-1768119751761115364
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh cat /mount-9p/test-1768119751761115364
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-952579 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [eab82bb2-816e-40f1-bfdd-07532b8814a6] Pending
helpers_test.go:353: "busybox-mount" [eab82bb2-816e-40f1-bfdd-07532b8814a6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [eab82bb2-816e-40f1-bfdd-07532b8814a6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [eab82bb2-816e-40f1-bfdd-07532b8814a6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003797953s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-952579 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-952579 /tmp/TestFunctionalparallelMountCmdany-port3183278209/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.04s)
TestFunctional/parallel/MountCmd/specific-port (2.5s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-952579 /tmp/TestFunctionalparallelMountCmdspecific-port3370600579/001:/mount-9p --alsologtostderr -v=1 --port 46383]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-952579 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (599.803607ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0111 08:22:40.402884  576907 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-952579 /tmp/TestFunctionalparallelMountCmdspecific-port3370600579/001:/mount-9p --alsologtostderr -v=1 --port 46383] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-952579 ssh "sudo umount -f /mount-9p": exit status 1 (349.60631ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-952579 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-952579 /tmp/TestFunctionalparallelMountCmdspecific-port3370600579/001:/mount-9p --alsologtostderr -v=1 --port 46383] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.50s)
TestFunctional/parallel/MountCmd/VerifyCleanup (2.58s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-952579 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2258580677/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-952579 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2258580677/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-952579 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2258580677/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-952579 ssh "findmnt -T" /mount1: exit status 1 (761.446384ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-952579 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-952579 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-952579 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2258580677/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-952579 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2258580677/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-952579 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2258580677/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.58s)
TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-952579
--- PASS: TestFunctional/delete_echo-server_images (0.07s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-952579
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-952579
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestMultiControlPlane/serial/StartCluster (153.84s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0111 08:23:26.070352  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:24:47.991354  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m32.999794985s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (153.84s)
TestMultiControlPlane/serial/DeployApp (6.78s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 kubectl -- rollout status deployment/busybox: (4.002569238s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-82m8b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-r9wld -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-tlrld -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-82m8b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-r9wld -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-tlrld -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-82m8b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-r9wld -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-tlrld -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.78s)
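
Note: the exec sequence above resolves three names (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) from each busybox replica to confirm in-cluster DNS works from every node. A sketch of the same loop for one of the names, assuming kubectl is pointed at this profile's context and the busybox pods are the only pods in the default namespace (as in this run):

    kubectl --context ha-514872 apply -f ./testdata/ha/ha-pod-dns-test.yaml
    kubectl --context ha-514872 rollout status deployment/busybox
    for pod in $(kubectl --context ha-514872 get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl --context ha-514872 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done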

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-82m8b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-82m8b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-r9wld -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-r9wld -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-tlrld -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 kubectl -- exec busybox-769dd8b7dd-tlrld -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.50s)
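
Note: the sh -c pipeline above resolves host.minikube.internal inside the pod, keeps the fifth line of the BusyBox nslookup output with awk 'NR==5' (the line where this test expects the resolved address), and extracts the third space-separated field with cut; the follow-up exec pings 192.168.49.1, the gateway of the cluster's docker network in this run. A sketch against a single pod, with the pod name taken from the output above:

    POD=busybox-769dd8b7dd-82m8b
    kubectl --context ha-514872 exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-514872 exec "$POD" -- sh -c "ping -c 1 192.168.49.1"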

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (32.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 node add --alsologtostderr -v 5: (31.41198067s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5: (1.136863413s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (32.55s)
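
Note: node add with no extra flags joins a worker node (the ha-514872-m04 entries in later status output); the --control-plane variant is exercised in AddSecondaryNode further down. A minimal sketch:

    minikube -p ha-514872 node add
    minikube -p ha-514872 status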

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-514872 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.078703391s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 status --output json --alsologtostderr -v 5: (1.060370206s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp testdata/cp-test.txt ha-514872:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile488352838/001/cp-test_ha-514872.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872:/home/docker/cp-test.txt ha-514872-m02:/home/docker/cp-test_ha-514872_ha-514872-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m02 "sudo cat /home/docker/cp-test_ha-514872_ha-514872-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872:/home/docker/cp-test.txt ha-514872-m03:/home/docker/cp-test_ha-514872_ha-514872-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m03 "sudo cat /home/docker/cp-test_ha-514872_ha-514872-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872:/home/docker/cp-test.txt ha-514872-m04:/home/docker/cp-test_ha-514872_ha-514872-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m04 "sudo cat /home/docker/cp-test_ha-514872_ha-514872-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp testdata/cp-test.txt ha-514872-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile488352838/001/cp-test_ha-514872-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872-m02:/home/docker/cp-test.txt ha-514872:/home/docker/cp-test_ha-514872-m02_ha-514872.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872 "sudo cat /home/docker/cp-test_ha-514872-m02_ha-514872.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872-m02:/home/docker/cp-test.txt ha-514872-m03:/home/docker/cp-test_ha-514872-m02_ha-514872-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m03 "sudo cat /home/docker/cp-test_ha-514872-m02_ha-514872-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872-m02:/home/docker/cp-test.txt ha-514872-m04:/home/docker/cp-test_ha-514872-m02_ha-514872-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m04 "sudo cat /home/docker/cp-test_ha-514872-m02_ha-514872-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp testdata/cp-test.txt ha-514872-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile488352838/001/cp-test_ha-514872-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872-m03:/home/docker/cp-test.txt ha-514872:/home/docker/cp-test_ha-514872-m03_ha-514872.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872 "sudo cat /home/docker/cp-test_ha-514872-m03_ha-514872.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872-m03:/home/docker/cp-test.txt ha-514872-m02:/home/docker/cp-test_ha-514872-m03_ha-514872-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m02 "sudo cat /home/docker/cp-test_ha-514872-m03_ha-514872-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872-m03:/home/docker/cp-test.txt ha-514872-m04:/home/docker/cp-test_ha-514872-m03_ha-514872-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m04 "sudo cat /home/docker/cp-test_ha-514872-m03_ha-514872-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp testdata/cp-test.txt ha-514872-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile488352838/001/cp-test_ha-514872-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872-m04:/home/docker/cp-test.txt ha-514872:/home/docker/cp-test_ha-514872-m04_ha-514872.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872 "sudo cat /home/docker/cp-test_ha-514872-m04_ha-514872.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872-m04:/home/docker/cp-test.txt ha-514872-m02:/home/docker/cp-test_ha-514872-m04_ha-514872-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m02 "sudo cat /home/docker/cp-test_ha-514872-m04_ha-514872-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 cp ha-514872-m04:/home/docker/cp-test.txt ha-514872-m03:/home/docker/cp-test_ha-514872-m04_ha-514872-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 ssh -n ha-514872-m03 "sudo cat /home/docker/cp-test_ha-514872-m04_ha-514872-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.94s)
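
Note: the long sequence above is an all-pairs copy check: testdata/cp-test.txt is pushed to every node, copied from each node back to a host temp directory and onto every other node, and each copy is verified with ssh -n <node> "sudo cat ...". A compact sketch of the same matrix, using the node names from this run:

    NODES="ha-514872 ha-514872-m02 ha-514872-m03 ha-514872-m04"
    for SRC in $NODES; do
      minikube -p ha-514872 cp testdata/cp-test.txt "$SRC:/home/docker/cp-test.txt"
      minikube -p ha-514872 cp "$SRC:/home/docker/cp-test.txt" "/tmp/cp-test_${SRC}.txt"
      for DST in $NODES; do
        [ "$SRC" = "$DST" ] && continue
        minikube -p ha-514872 cp "$SRC:/home/docker/cp-test.txt" "$DST:/home/docker/cp-test_${SRC}_${DST}.txt"
        minikube -p ha-514872 ssh -n "$DST" "sudo cat /home/docker/cp-test_${SRC}_${DST}.txt"
      done
    done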

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 node stop m02 --alsologtostderr -v 5: (12.086237697s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5: exit status 7 (787.25717ms)

                                                
                                                
-- stdout --
	ha-514872
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-514872-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-514872-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-514872-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:26:43.428851  618979 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:26:43.429035  618979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:26:43.429066  618979 out.go:374] Setting ErrFile to fd 2...
	I0111 08:26:43.429090  618979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:26:43.429373  618979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:26:43.429589  618979 out.go:368] Setting JSON to false
	I0111 08:26:43.429654  618979 mustload.go:66] Loading cluster: ha-514872
	I0111 08:26:43.429735  618979 notify.go:221] Checking for updates...
	I0111 08:26:43.431179  618979 config.go:182] Loaded profile config "ha-514872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:26:43.431233  618979 status.go:174] checking status of ha-514872 ...
	I0111 08:26:43.432122  618979 cli_runner.go:164] Run: docker container inspect ha-514872 --format={{.State.Status}}
	I0111 08:26:43.451750  618979 status.go:371] ha-514872 host status = "Running" (err=<nil>)
	I0111 08:26:43.451773  618979 host.go:66] Checking if "ha-514872" exists ...
	I0111 08:26:43.452189  618979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-514872
	I0111 08:26:43.488288  618979 host.go:66] Checking if "ha-514872" exists ...
	I0111 08:26:43.488595  618979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:26:43.488633  618979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-514872
	I0111 08:26:43.508929  618979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/ha-514872/id_rsa Username:docker}
	I0111 08:26:43.615944  618979 ssh_runner.go:195] Run: systemctl --version
	I0111 08:26:43.623095  618979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:26:43.637173  618979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:26:43.698867  618979 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2026-01-11 08:26:43.689048816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:26:43.699485  618979 kubeconfig.go:125] found "ha-514872" server: "https://192.168.49.254:8443"
	I0111 08:26:43.699523  618979 api_server.go:166] Checking apiserver status ...
	I0111 08:26:43.699570  618979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:26:43.711580  618979 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1258/cgroup
	I0111 08:26:43.720040  618979 api_server.go:192] apiserver freezer: "10:freezer:/docker/70983aca2913602d7c741282c0e059f621308735fb27ba70594c063981a5af45/crio/crio-15f4b8aec001bce5420ce6974f6808668a8958376c9f4518777d6573ff2e8a20"
	I0111 08:26:43.720122  618979 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/70983aca2913602d7c741282c0e059f621308735fb27ba70594c063981a5af45/crio/crio-15f4b8aec001bce5420ce6974f6808668a8958376c9f4518777d6573ff2e8a20/freezer.state
	I0111 08:26:43.727747  618979 api_server.go:214] freezer state: "THAWED"
	I0111 08:26:43.727778  618979 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0111 08:26:43.735875  618979 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0111 08:26:43.735906  618979 status.go:463] ha-514872 apiserver status = Running (err=<nil>)
	I0111 08:26:43.735917  618979 status.go:176] ha-514872 status: &{Name:ha-514872 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 08:26:43.735933  618979 status.go:174] checking status of ha-514872-m02 ...
	I0111 08:26:43.736240  618979 cli_runner.go:164] Run: docker container inspect ha-514872-m02 --format={{.State.Status}}
	I0111 08:26:43.753906  618979 status.go:371] ha-514872-m02 host status = "Stopped" (err=<nil>)
	I0111 08:26:43.753931  618979 status.go:384] host is not running, skipping remaining checks
	I0111 08:26:43.753938  618979 status.go:176] ha-514872-m02 status: &{Name:ha-514872-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 08:26:43.753959  618979 status.go:174] checking status of ha-514872-m03 ...
	I0111 08:26:43.754342  618979 cli_runner.go:164] Run: docker container inspect ha-514872-m03 --format={{.State.Status}}
	I0111 08:26:43.771460  618979 status.go:371] ha-514872-m03 host status = "Running" (err=<nil>)
	I0111 08:26:43.771487  618979 host.go:66] Checking if "ha-514872-m03" exists ...
	I0111 08:26:43.771815  618979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-514872-m03
	I0111 08:26:43.792427  618979 host.go:66] Checking if "ha-514872-m03" exists ...
	I0111 08:26:43.792821  618979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:26:43.792870  618979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-514872-m03
	I0111 08:26:43.816880  618979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/ha-514872-m03/id_rsa Username:docker}
	I0111 08:26:43.927677  618979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:26:43.942457  618979 kubeconfig.go:125] found "ha-514872" server: "https://192.168.49.254:8443"
	I0111 08:26:43.942483  618979 api_server.go:166] Checking apiserver status ...
	I0111 08:26:43.942525  618979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:26:43.953688  618979 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	I0111 08:26:43.963910  618979 api_server.go:192] apiserver freezer: "10:freezer:/docker/9e334ba6aec0855829267c7b1bcca58fe01923cad3e86ac9a435d8bf0f4525a3/crio/crio-6ae6ed8c588c5a518ff5c25da6b1ae6c567e84d9626a8eb7468acbe2137732e0"
	I0111 08:26:43.964026  618979 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9e334ba6aec0855829267c7b1bcca58fe01923cad3e86ac9a435d8bf0f4525a3/crio/crio-6ae6ed8c588c5a518ff5c25da6b1ae6c567e84d9626a8eb7468acbe2137732e0/freezer.state
	I0111 08:26:43.971664  618979 api_server.go:214] freezer state: "THAWED"
	I0111 08:26:43.971738  618979 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0111 08:26:43.979759  618979 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0111 08:26:43.979831  618979 status.go:463] ha-514872-m03 apiserver status = Running (err=<nil>)
	I0111 08:26:43.979856  618979 status.go:176] ha-514872-m03 status: &{Name:ha-514872-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 08:26:43.979885  618979 status.go:174] checking status of ha-514872-m04 ...
	I0111 08:26:43.980215  618979 cli_runner.go:164] Run: docker container inspect ha-514872-m04 --format={{.State.Status}}
	I0111 08:26:43.998266  618979 status.go:371] ha-514872-m04 host status = "Running" (err=<nil>)
	I0111 08:26:43.998288  618979 host.go:66] Checking if "ha-514872-m04" exists ...
	I0111 08:26:43.998583  618979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-514872-m04
	I0111 08:26:44.019764  618979 host.go:66] Checking if "ha-514872-m04" exists ...
	I0111 08:26:44.020091  618979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:26:44.020142  618979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-514872-m04
	I0111 08:26:44.040219  618979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/ha-514872-m04/id_rsa Username:docker}
	I0111 08:26:44.145364  618979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:26:44.164843  618979 status.go:176] ha-514872-m04 status: &{Name:ha-514872-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
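
Note: the Non-zero exit above is the expected outcome rather than a failure: minikube status reports a non-zero exit code (7 in this run) when a node in the profile is stopped, while still printing per-node state. A sketch that surfaces the exit code explicitly:

    minikube -p ha-514872 node stop m02
    minikube -p ha-514872 status; echo "status exit code: $?"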

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.92s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (21.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 node start m02 --alsologtostderr -v 5
E0111 08:27:04.133169  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 node start m02 --alsologtostderr -v 5: (20.187885816s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5: (1.233658327s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.56s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.308910222s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (111.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 stop --alsologtostderr -v 5
E0111 08:27:08.541236  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:08.546966  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:08.557161  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:08.577419  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:08.617745  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:08.697994  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:08.858789  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:09.179736  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:09.820952  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:11.101330  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:13.662328  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:18.783345  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:29.023566  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:31.832475  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 stop --alsologtostderr -v 5: (37.504586621s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 start --wait true --alsologtostderr -v 5
E0111 08:27:49.504601  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:30.465581  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 start --wait true --alsologtostderr -v 5: (1m13.351319627s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (111.03s)
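
Note: this step verifies that a full stop/start cycle preserves the node list. A minimal sketch with the same profile:

    minikube -p ha-514872 node list
    minikube -p ha-514872 stop
    minikube -p ha-514872 start --wait true
    minikube -p ha-514872 node list    # expected to match the pre-stop list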

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 node delete m03 --alsologtostderr -v 5: (11.46614728s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.48s)
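
Note: after deleting m03, the remaining nodes are checked for readiness with a go-template that prints each node's Ready condition. A sketch of the same check, with the template's outer quoting simplified for an interactive shell:

    minikube -p ha-514872 node delete m03
    kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'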

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 stop --alsologtostderr -v 5: (35.974022316s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5: exit status 7 (114.319221ms)

                                                
                                                
-- stdout --
	ha-514872
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-514872-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-514872-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:29:48.285574  631003 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:29:48.285712  631003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:29:48.285721  631003 out.go:374] Setting ErrFile to fd 2...
	I0111 08:29:48.285726  631003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:29:48.286002  631003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:29:48.286227  631003 out.go:368] Setting JSON to false
	I0111 08:29:48.286256  631003 mustload.go:66] Loading cluster: ha-514872
	I0111 08:29:48.286302  631003 notify.go:221] Checking for updates...
	I0111 08:29:48.287187  631003 config.go:182] Loaded profile config "ha-514872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:29:48.287214  631003 status.go:174] checking status of ha-514872 ...
	I0111 08:29:48.288378  631003 cli_runner.go:164] Run: docker container inspect ha-514872 --format={{.State.Status}}
	I0111 08:29:48.308929  631003 status.go:371] ha-514872 host status = "Stopped" (err=<nil>)
	I0111 08:29:48.308955  631003 status.go:384] host is not running, skipping remaining checks
	I0111 08:29:48.308963  631003 status.go:176] ha-514872 status: &{Name:ha-514872 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 08:29:48.308993  631003 status.go:174] checking status of ha-514872-m02 ...
	I0111 08:29:48.309297  631003 cli_runner.go:164] Run: docker container inspect ha-514872-m02 --format={{.State.Status}}
	I0111 08:29:48.331747  631003 status.go:371] ha-514872-m02 host status = "Stopped" (err=<nil>)
	I0111 08:29:48.331774  631003 status.go:384] host is not running, skipping remaining checks
	I0111 08:29:48.331781  631003 status.go:176] ha-514872-m02 status: &{Name:ha-514872-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 08:29:48.331799  631003 status.go:174] checking status of ha-514872-m04 ...
	I0111 08:29:48.332095  631003 cli_runner.go:164] Run: docker container inspect ha-514872-m04 --format={{.State.Status}}
	I0111 08:29:48.353752  631003 status.go:371] ha-514872-m04 host status = "Stopped" (err=<nil>)
	I0111 08:29:48.353777  631003 status.go:384] host is not running, skipping remaining checks
	I0111 08:29:48.353785  631003 status.go:176] ha-514872-m04 status: &{Name:ha-514872-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (71.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0111 08:29:52.386063  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m10.463551155s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (71.50s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.047947946s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.05s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (48.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 node add --control-plane --alsologtostderr -v 5: (47.274024937s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-514872 status --alsologtostderr -v 5: (1.046438224s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (48.32s)
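
Note: this is the control-plane variant of node add from AddWorkerNode earlier; the only difference is the --control-plane flag:

    minikube -p ha-514872 node add --control-plane
    minikube -p ha-514872 status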

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.131652097s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

                                                
                                    
TestJSONOutput/start/Command (45.98s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-656609 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0111 08:32:04.135786  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:08.542726  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:36.226324  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-656609 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (45.974598759s)
--- PASS: TestJSONOutput/start/Command (45.98s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-656609 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-656609 --output=json --user=testUser: (5.874947814s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.26s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-135566 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-135566 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (112.084322ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"84260a6e-e4ad-4718-9b09-e52ef946f46b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-135566] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a981e48-8b62-4791-88dc-20ce0d707272","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22402"}}
	{"specversion":"1.0","id":"79213ddd-ec55-4ff6-9328-bfc64d7b5e78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"203dd3ad-2f7b-4ba9-b219-d5c0c3bb1f27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig"}}
	{"specversion":"1.0","id":"8b8c18db-bcac-47af-bd78-1940aa101953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube"}}
	{"specversion":"1.0","id":"36d480a9-a337-4132-965e-caebc261e00c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"337bf7ba-9a7f-473b-8008-1d2083b554a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9ea27fc2-99c7-4751-8dcb-58029a753f75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-135566" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-135566
--- PASS: TestErrorJSONOutput (0.26s)
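
Note: with --output=json, each event in the stdout above is a single-line CloudEvents-style JSON object carrying type and data fields, so the stream can be filtered with a line-oriented JSON tool. A sketch using jq (jq itself is an assumption, not part of this run) to pull the error message out of the failing start:

    minikube start -p json-output-error-135566 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'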

                                                
                                    
TestKicCustomNetwork/create_custom_network (33.82s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-515307 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-515307 --network=: (31.613746677s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-515307" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-515307
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-515307: (2.182921729s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.82s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (32.22s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-586442 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-586442 --network=bridge: (30.141637646s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-586442" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-586442
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-586442: (2.049471807s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.22s)
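
Note: the two KicCustomNetwork cases differ only in the --network value: an empty value leaves network selection to minikube's default handling for the profile, while --network=bridge attaches the node container to docker's built-in bridge. A sketch of the second case:

    minikube start -p docker-network-586442 --network=bridge
    docker network ls --format '{{.Name}}'
    minikube delete -p docker-network-586442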

                                                
                                    
TestKicExistingNetwork (28.63s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0111 08:34:05.945898  576907 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0111 08:34:05.961844  576907 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0111 08:34:05.962736  576907 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0111 08:34:05.962779  576907 cli_runner.go:164] Run: docker network inspect existing-network
W0111 08:34:05.978961  576907 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0111 08:34:05.978994  576907 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0111 08:34:05.979010  576907 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0111 08:34:05.979112  576907 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 08:34:05.996407  576907 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-113e3e286bbe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:2e:86:95:08:19} reservation:<nil>}
I0111 08:34:05.996752  576907 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ffb3d0}
I0111 08:34:05.996782  576907 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0111 08:34:05.996833  576907 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0111 08:34:06.073361  576907 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-393126 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-393126 --network=existing-network: (26.387357911s)
helpers_test.go:176: Cleaning up "existing-network-393126" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-393126
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-393126: (2.080871543s)
I0111 08:34:34.558637  576907 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (28.63s)

                                                
                                    
TestKicCustomSubnet (26.64s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-377672 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-377672 --subnet=192.168.60.0/24: (24.345527758s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-377672 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-377672" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-377672
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-377672: (2.266041634s)
--- PASS: TestKicCustomSubnet (26.64s)

                                                
                                    
TestKicStaticIP (31.89s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-746799 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-746799 --static-ip=192.168.200.200: (29.515988541s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-746799 ip
helpers_test.go:176: Cleaning up "static-ip-746799" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-746799
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-746799: (2.215119905s)
--- PASS: TestKicStaticIP (31.89s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (62.89s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-195026 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-195026 --driver=docker  --container-runtime=crio: (26.547166395s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-197570 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-197570 --driver=docker  --container-runtime=crio: (30.463052799s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-195026
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-197570
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-197570" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-197570
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-197570: (2.109462313s)
helpers_test.go:176: Cleaning up "first-195026" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-195026
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-195026: (2.358875201s)
--- PASS: TestMinikubeProfile (62.89s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.97s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-029901 --memory=3072 --mount-string /tmp/TestMountStartserial1054736464/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-029901 --memory=3072 --mount-string /tmp/TestMountStartserial1054736464/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.952322038s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.97s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-029901 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.73s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-031780 --memory=3072 --mount-string /tmp/TestMountStartserial1054736464/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-031780 --memory=3072 --mount-string /tmp/TestMountStartserial1054736464/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.728070137s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.73s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-031780 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-029901 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-029901 --alsologtostderr -v=5: (1.708194761s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-031780 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-031780
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-031780: (1.300371033s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.51s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-031780
E0111 08:37:04.132294  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-031780: (7.50998134s)
--- PASS: TestMountStart/serial/RestartStopped (8.51s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-031780 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (75.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-869861 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0111 08:37:08.541439  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-869861 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m14.468868238s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.03s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- rollout status deployment/busybox
E0111 08:38:27.192733  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-869861 -- rollout status deployment/busybox: (3.619668227s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- exec busybox-769dd8b7dd-c27ns -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- exec busybox-769dd8b7dd-gs5f9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- exec busybox-769dd8b7dd-c27ns -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- exec busybox-769dd8b7dd-gs5f9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- exec busybox-769dd8b7dd-c27ns -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- exec busybox-769dd8b7dd-gs5f9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.37s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- exec busybox-769dd8b7dd-c27ns -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- exec busybox-769dd8b7dd-c27ns -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- exec busybox-769dd8b7dd-gs5f9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-869861 -- exec busybox-769dd8b7dd-gs5f9 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
TestMultiNode/serial/AddNode (28.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-869861 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-869861 -v=5 --alsologtostderr: (28.118861585s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.83s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-869861 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.77s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 cp testdata/cp-test.txt multinode-869861:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 cp multinode-869861:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2291904672/001/cp-test_multinode-869861.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 cp multinode-869861:/home/docker/cp-test.txt multinode-869861-m02:/home/docker/cp-test_multinode-869861_multinode-869861-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861-m02 "sudo cat /home/docker/cp-test_multinode-869861_multinode-869861-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 cp multinode-869861:/home/docker/cp-test.txt multinode-869861-m03:/home/docker/cp-test_multinode-869861_multinode-869861-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861-m03 "sudo cat /home/docker/cp-test_multinode-869861_multinode-869861-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 cp testdata/cp-test.txt multinode-869861-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 cp multinode-869861-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2291904672/001/cp-test_multinode-869861-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 cp multinode-869861-m02:/home/docker/cp-test.txt multinode-869861:/home/docker/cp-test_multinode-869861-m02_multinode-869861.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861 "sudo cat /home/docker/cp-test_multinode-869861-m02_multinode-869861.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 cp multinode-869861-m02:/home/docker/cp-test.txt multinode-869861-m03:/home/docker/cp-test_multinode-869861-m02_multinode-869861-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861-m03 "sudo cat /home/docker/cp-test_multinode-869861-m02_multinode-869861-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 cp testdata/cp-test.txt multinode-869861-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 cp multinode-869861-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2291904672/001/cp-test_multinode-869861-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 cp multinode-869861-m03:/home/docker/cp-test.txt multinode-869861:/home/docker/cp-test_multinode-869861-m03_multinode-869861.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861 "sudo cat /home/docker/cp-test_multinode-869861-m03_multinode-869861.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 cp multinode-869861-m03:/home/docker/cp-test.txt multinode-869861-m02:/home/docker/cp-test_multinode-869861-m03_multinode-869861-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 ssh -n multinode-869861-m02 "sudo cat /home/docker/cp-test_multinode-869861-m03_multinode-869861-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.69s)

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-869861 node stop m03: (1.317119786s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-869861 status: exit status 7 (536.611601ms)

                                                
                                                
-- stdout --
	multinode-869861
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-869861-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-869861-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-869861 status --alsologtostderr: exit status 7 (543.672284ms)

                                                
                                                
-- stdout --
	multinode-869861
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-869861-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-869861-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:39:11.975560  681594 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:39:11.975790  681594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:39:11.975820  681594 out.go:374] Setting ErrFile to fd 2...
	I0111 08:39:11.975848  681594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:39:11.976474  681594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:39:11.976779  681594 out.go:368] Setting JSON to false
	I0111 08:39:11.976825  681594 mustload.go:66] Loading cluster: multinode-869861
	I0111 08:39:11.977797  681594 config.go:182] Loaded profile config "multinode-869861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:39:11.977843  681594 status.go:174] checking status of multinode-869861 ...
	I0111 08:39:11.978423  681594 notify.go:221] Checking for updates...
	I0111 08:39:11.979263  681594 cli_runner.go:164] Run: docker container inspect multinode-869861 --format={{.State.Status}}
	I0111 08:39:12.000508  681594 status.go:371] multinode-869861 host status = "Running" (err=<nil>)
	I0111 08:39:12.000536  681594 host.go:66] Checking if "multinode-869861" exists ...
	I0111 08:39:12.000988  681594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-869861
	I0111 08:39:12.034836  681594 host.go:66] Checking if "multinode-869861" exists ...
	I0111 08:39:12.035142  681594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:39:12.035184  681594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-869861
	I0111 08:39:12.054663  681594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33638 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/multinode-869861/id_rsa Username:docker}
	I0111 08:39:12.159604  681594 ssh_runner.go:195] Run: systemctl --version
	I0111 08:39:12.166289  681594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:39:12.179632  681594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:39:12.246827  681594 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-11 08:39:12.236093202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:39:12.247373  681594 kubeconfig.go:125] found "multinode-869861" server: "https://192.168.67.2:8443"
	I0111 08:39:12.247416  681594 api_server.go:166] Checking apiserver status ...
	I0111 08:39:12.247468  681594 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:39:12.259176  681594 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1228/cgroup
	I0111 08:39:12.267658  681594 api_server.go:192] apiserver freezer: "10:freezer:/docker/0b8ad918bd02e1d31df659d5274e8caafe78b17d35b80de8b08f362e7d969f72/crio/crio-69ccebff66254a447eb46b7c71dd118a52d61c3c110706213df51ca3fdaf840c"
	I0111 08:39:12.267728  681594 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0b8ad918bd02e1d31df659d5274e8caafe78b17d35b80de8b08f362e7d969f72/crio/crio-69ccebff66254a447eb46b7c71dd118a52d61c3c110706213df51ca3fdaf840c/freezer.state
	I0111 08:39:12.275313  681594 api_server.go:214] freezer state: "THAWED"
	I0111 08:39:12.275347  681594 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0111 08:39:12.283838  681594 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0111 08:39:12.283867  681594 status.go:463] multinode-869861 apiserver status = Running (err=<nil>)
	I0111 08:39:12.283879  681594 status.go:176] multinode-869861 status: &{Name:multinode-869861 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 08:39:12.283894  681594 status.go:174] checking status of multinode-869861-m02 ...
	I0111 08:39:12.284218  681594 cli_runner.go:164] Run: docker container inspect multinode-869861-m02 --format={{.State.Status}}
	I0111 08:39:12.301008  681594 status.go:371] multinode-869861-m02 host status = "Running" (err=<nil>)
	I0111 08:39:12.301033  681594 host.go:66] Checking if "multinode-869861-m02" exists ...
	I0111 08:39:12.301337  681594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-869861-m02
	I0111 08:39:12.318716  681594 host.go:66] Checking if "multinode-869861-m02" exists ...
	I0111 08:39:12.319046  681594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:39:12.319095  681594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-869861-m02
	I0111 08:39:12.336908  681594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33643 SSHKeyPath:/home/jenkins/minikube-integration/22402-575040/.minikube/machines/multinode-869861-m02/id_rsa Username:docker}
	I0111 08:39:12.439620  681594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:39:12.452537  681594 status.go:176] multinode-869861-m02 status: &{Name:multinode-869861-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0111 08:39:12.452570  681594 status.go:174] checking status of multinode-869861-m03 ...
	I0111 08:39:12.452886  681594 cli_runner.go:164] Run: docker container inspect multinode-869861-m03 --format={{.State.Status}}
	I0111 08:39:12.471168  681594 status.go:371] multinode-869861-m03 host status = "Stopped" (err=<nil>)
	I0111 08:39:12.471194  681594 status.go:384] host is not running, skipping remaining checks
	I0111 08:39:12.471202  681594 status.go:176] multinode-869861-m03 status: &{Name:multinode-869861-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-869861 node start m03 -v=5 --alsologtostderr: (7.23183814s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.03s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (77.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-869861
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-869861
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-869861: (25.032951573s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-869861 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-869861 --wait=true -v=5 --alsologtostderr: (51.965648288s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-869861
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.12s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-869861 node delete m03: (4.825904925s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.53s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-869861 stop: (23.829088794s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-869861 status: exit status 7 (98.586897ms)

                                                
                                                
-- stdout --
	multinode-869861
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-869861-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-869861 status --alsologtostderr: exit status 7 (95.456306ms)

                                                
                                                
-- stdout --
	multinode-869861
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-869861-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:41:07.130101  689456 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:41:07.130411  689456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:41:07.130448  689456 out.go:374] Setting ErrFile to fd 2...
	I0111 08:41:07.130470  689456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:41:07.130749  689456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:41:07.130979  689456 out.go:368] Setting JSON to false
	I0111 08:41:07.131037  689456 mustload.go:66] Loading cluster: multinode-869861
	I0111 08:41:07.131130  689456 notify.go:221] Checking for updates...
	I0111 08:41:07.131492  689456 config.go:182] Loaded profile config "multinode-869861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:41:07.131534  689456 status.go:174] checking status of multinode-869861 ...
	I0111 08:41:07.132104  689456 cli_runner.go:164] Run: docker container inspect multinode-869861 --format={{.State.Status}}
	I0111 08:41:07.150799  689456 status.go:371] multinode-869861 host status = "Stopped" (err=<nil>)
	I0111 08:41:07.150824  689456 status.go:384] host is not running, skipping remaining checks
	I0111 08:41:07.150831  689456 status.go:176] multinode-869861 status: &{Name:multinode-869861 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 08:41:07.150863  689456 status.go:174] checking status of multinode-869861-m02 ...
	I0111 08:41:07.151183  689456 cli_runner.go:164] Run: docker container inspect multinode-869861-m02 --format={{.State.Status}}
	I0111 08:41:07.177222  689456 status.go:371] multinode-869861-m02 host status = "Stopped" (err=<nil>)
	I0111 08:41:07.177247  689456 status.go:384] host is not running, skipping remaining checks
	I0111 08:41:07.177254  689456 status.go:176] multinode-869861-m02 status: &{Name:multinode-869861-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (54.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-869861 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-869861 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (53.358381076s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-869861 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.10s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (29.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-869861
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-869861-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-869861-m02 --driver=docker  --container-runtime=crio: exit status 14 (97.780388ms)

                                                
                                                
-- stdout --
	* [multinode-869861-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-869861-m02' is duplicated with machine name 'multinode-869861-m02' in profile 'multinode-869861'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-869861-m03 --driver=docker  --container-runtime=crio
E0111 08:42:04.133183  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:42:08.543381  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-869861-m03 --driver=docker  --container-runtime=crio: (26.889327124s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-869861
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-869861: exit status 80 (346.885197ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-869861 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-869861-m03 already exists in multinode-869861-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-869861-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-869861-m03: (2.083190052s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.49s)

                                                
                                    
TestScheduledStopUnix (103.6s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-415795 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-415795 --memory=3072 --driver=docker  --container-runtime=crio: (27.269951724s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-415795 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0111 08:43:02.384047  697883 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:43:02.384240  697883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:43:02.384272  697883 out.go:374] Setting ErrFile to fd 2...
	I0111 08:43:02.384292  697883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:43:02.384590  697883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:43:02.384891  697883 out.go:368] Setting JSON to false
	I0111 08:43:02.385033  697883 mustload.go:66] Loading cluster: scheduled-stop-415795
	I0111 08:43:02.385427  697883 config.go:182] Loaded profile config "scheduled-stop-415795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:43:02.385540  697883 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/scheduled-stop-415795/config.json ...
	I0111 08:43:02.385764  697883 mustload.go:66] Loading cluster: scheduled-stop-415795
	I0111 08:43:02.385926  697883 config.go:182] Loaded profile config "scheduled-stop-415795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-415795 -n scheduled-stop-415795
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0111 08:43:02.836524  697973 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:43:02.836691  697973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:43:02.836721  697973 out.go:374] Setting ErrFile to fd 2...
	I0111 08:43:02.836743  697973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:43:02.837027  697973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:43:02.837345  697973 out.go:368] Setting JSON to false
	I0111 08:43:02.839032  697973 daemonize_unix.go:73] killing process 697906 as it is an old scheduled stop
	I0111 08:43:02.842266  697973 mustload.go:66] Loading cluster: scheduled-stop-415795
	I0111 08:43:02.842952  697973 config.go:182] Loaded profile config "scheduled-stop-415795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:43:02.843065  697973 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/scheduled-stop-415795/config.json ...
	I0111 08:43:02.843295  697973 mustload.go:66] Loading cluster: scheduled-stop-415795
	I0111 08:43:02.843466  697973 config.go:182] Loaded profile config "scheduled-stop-415795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I0111 08:43:02.849089  576907 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/scheduled-stop-415795/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-415795 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-415795 -n scheduled-stop-415795
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-415795
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-415795 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0111 08:43:28.762848  698457 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:43:28.763041  698457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:43:28.763068  698457 out.go:374] Setting ErrFile to fd 2...
	I0111 08:43:28.763091  698457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:43:28.763434  698457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:43:28.763754  698457 out.go:368] Setting JSON to false
	I0111 08:43:28.763891  698457 mustload.go:66] Loading cluster: scheduled-stop-415795
	I0111 08:43:28.764286  698457 config.go:182] Loaded profile config "scheduled-stop-415795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:43:28.764420  698457 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/scheduled-stop-415795/config.json ...
	I0111 08:43:28.764657  698457 mustload.go:66] Loading cluster: scheduled-stop-415795
	I0111 08:43:28.764823  698457 config.go:182] Loaded profile config "scheduled-stop-415795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
E0111 08:43:31.586568  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-415795
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-415795: exit status 7 (69.002496ms)

                                                
                                                
-- stdout --
	scheduled-stop-415795
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-415795 -n scheduled-stop-415795
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-415795 -n scheduled-stop-415795: exit status 7 (69.191586ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-415795" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-415795
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-415795: (4.747047282s)
--- PASS: TestScheduledStopUnix (103.60s)

                                                
                                    
TestInsufficientStorage (12.51s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-616205 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-616205 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.958963882s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"480fbf4b-3d02-4626-b670-f83f29f8f995","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-616205] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f54dbf8b-c5e2-4d94-aafc-ee8fe6298bd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22402"}}
	{"specversion":"1.0","id":"a8d5f71d-361c-49b5-85da-be4838dd2106","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1ce0f49a-bb78-4d92-9bac-386cdfba31ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig"}}
	{"specversion":"1.0","id":"be4ecf50-e558-4107-95f6-e745db218fb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube"}}
	{"specversion":"1.0","id":"3f7d0037-f0a4-45d0-9e52-d256695d5aa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"cb194220-393f-439c-8068-78c412bd23d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f5b07ff9-b8a9-45e5-af29-1f18b5a7e58b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7756cfbb-ba9e-417b-88c0-699683608664","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d3df486a-c498-440a-a462-e57ee75b9c3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e34c829-105b-4192-8302-81bafe79a077","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6e632656-45ae-4438-b57c-84c5e3e4674d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-616205\" primary control-plane node in \"insufficient-storage-616205\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f734c258-83dc-4dee-9612-8c57d7d35671","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1768032998-22402 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"68a1cc9a-b413-4740-8b69-ecb8db42cb7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2333aa55-bd33-4da3-9765-63f269e438ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-616205 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-616205 --output=json --layout=cluster: exit status 7 (304.0606ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-616205","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-616205","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 08:44:28.910807  700323 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-616205" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-616205 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-616205 --output=json --layout=cluster: exit status 7 (298.074584ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-616205","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-616205","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 08:44:29.209539  700391 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-616205" does not appear in /home/jenkins/minikube-integration/22402-575040/kubeconfig
	E0111 08:44:29.219644  700391 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/insufficient-storage-616205/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-616205" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-616205
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-616205: (1.94655506s)
--- PASS: TestInsufficientStorage (12.51s)

                                                
                                    
TestRunningBinaryUpgrade (313.38s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.386643722 start -p running-upgrade-912166 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.386643722 start -p running-upgrade-912166 --memory=3072 --vm-driver=docker  --container-runtime=crio: (38.227195242s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-912166 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0111 08:52:04.133249  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:52:08.541210  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-912166 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.912058875s)
helpers_test.go:176: Cleaning up "running-upgrade-912166" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-912166
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-912166: (3.095170271s)
--- PASS: TestRunningBinaryUpgrade (313.38s)

                                                
                                    
TestKubernetesUpgrade (100.2s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-102854 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-102854 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.159395997s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-102854 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-102854 --alsologtostderr: (1.33119758s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-102854 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-102854 status --format={{.Host}}: exit status 7 (72.778613ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-102854 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0111 08:47:04.133168  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:47:08.541935  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-102854 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.412777108s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-102854 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-102854 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-102854 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (198.127244ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-102854] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-102854
	    minikube start -p kubernetes-upgrade-102854 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1028542 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-102854 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-102854 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-102854 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.062833402s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-102854" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-102854
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-102854: (2.81539503s)
--- PASS: TestKubernetesUpgrade (100.20s)

                                                
                                    
TestMissingContainerUpgrade (114.12s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1009208818 start -p missing-upgrade-819079 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1009208818 start -p missing-upgrade-819079 --memory=3072 --driver=docker  --container-runtime=crio: (1m3.083037395s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-819079
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-819079
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-819079 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-819079 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.171652215s)
helpers_test.go:176: Cleaning up "missing-upgrade-819079" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-819079
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-819079: (2.01731233s)
--- PASS: TestMissingContainerUpgrade (114.12s)

                                                
                                    
TestPause/serial/Start (55.98s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-042270 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-042270 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (55.978749274s)
--- PASS: TestPause/serial/Start (55.98s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (120.61s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-042270 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-042270 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m0.583277111s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (120.61s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (307.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2464579396 start -p stopped-upgrade-974287 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2464579396 start -p stopped-upgrade-974287 --memory=3072 --vm-driver=docker  --container-runtime=crio: (37.47475636s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2464579396 -p stopped-upgrade-974287 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2464579396 -p stopped-upgrade-974287 stop: (1.278244354s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-974287 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-974287 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.177582032s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (307.93s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-974287
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-974287: (2.101386545s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.10s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (71.86s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-821036 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-821036 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m5.196139194s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-821036 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-821036
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-821036: (5.867956907s)
--- PASS: TestPreload/Start-NoPreload-PullImage (71.86s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (45.5s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-821036 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-821036 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (45.262660358s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-821036 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (45.50s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-812923 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-812923 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (102.382002ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-812923] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (29.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-812923 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0111 08:55:07.193668  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-812923 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.694160236s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-812923 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-812923 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-812923 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.100848276s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-812923 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-812923 status -o json: exit status 2 (323.207855ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-812923","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-812923
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-812923: (2.006614535s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.43s)

                                                
                                    
TestNoKubernetes/serial/Start (7.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-812923 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-812923 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.979399452s)
--- PASS: TestNoKubernetes/serial/Start (7.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22402-575040/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-812923 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-812923 "sudo systemctl is-active --quiet service kubelet": exit status 1 (303.060713ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.02s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-812923
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-812923: (1.306073296s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-812923 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-812923 --driver=docker  --container-runtime=crio: (7.392289084s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.39s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-812923 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-812923 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.107321ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestNetworkPlugins/group/false (3.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-293572 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-293572 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (197.757345ms)

                                                
                                                
-- stdout --
	* [false-293572] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:55:51.277532  751370 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:55:51.277848  751370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:55:51.277883  751370 out.go:374] Setting ErrFile to fd 2...
	I0111 08:55:51.277904  751370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:55:51.278484  751370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-575040/.minikube/bin
	I0111 08:55:51.278967  751370 out.go:368] Setting JSON to false
	I0111 08:55:51.279839  751370 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13101,"bootTime":1768108650,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0111 08:55:51.279943  751370 start.go:143] virtualization:  
	I0111 08:55:51.283465  751370 out.go:179] * [false-293572] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:55:51.287248  751370 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:55:51.287341  751370 notify.go:221] Checking for updates...
	I0111 08:55:51.293200  751370 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:55:51.296126  751370 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-575040/kubeconfig
	I0111 08:55:51.299082  751370 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-575040/.minikube
	I0111 08:55:51.301970  751370 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:55:51.304858  751370 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:55:51.308439  751370 config.go:182] Loaded profile config "force-systemd-env-472282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0111 08:55:51.308605  751370 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:55:51.340215  751370 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:55:51.340328  751370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:55:51.406992  751370 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:55:51.396978166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:55:51.407116  751370 docker.go:319] overlay module found
	I0111 08:55:51.410217  751370 out.go:179] * Using the docker driver based on user configuration
	I0111 08:55:51.412992  751370 start.go:309] selected driver: docker
	I0111 08:55:51.413005  751370 start.go:928] validating driver "docker" against <nil>
	I0111 08:55:51.413018  751370 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:55:51.416548  751370 out.go:203] 
	W0111 08:55:51.419425  751370 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0111 08:55:51.422210  751370 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-293572 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-293572

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-293572

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-293572

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-293572

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-293572

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-293572

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-293572

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-293572

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-293572

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-293572

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-293572

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-293572" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-293572" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-293572

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-293572"

                                                
                                                
----------------------- debugLogs end: false-293572 [took: 3.272857675s] --------------------------------
helpers_test.go:176: Cleaning up "false-293572" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-293572
--- PASS: TestNetworkPlugins/group/false (3.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (61.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.563191281s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-931581 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0d413a31-5797-4ca1-95a0-a108b606a94b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0d413a31-5797-4ca1-95a0-a108b606a94b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003350493s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-931581 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.00s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-931581 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-931581 --alsologtostderr -v=3: (12.00145658s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-931581 -n old-k8s-version-931581
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-931581 -n old-k8s-version-931581: exit status 7 (67.131445ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-931581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-931581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.829924176s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-931581 -n old-k8s-version-931581
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-cnrhh" [ab919fee-d1a7-4612-9a7b-adf934b0d7c4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003451953s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-cnrhh" [ab919fee-d1a7-4612-9a7b-adf934b0d7c4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005198191s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-931581 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-931581 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (56.10s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (56.099361518s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.30s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-236664 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [544a96f5-758e-43eb-b70f-1c53d81f1687] Pending
helpers_test.go:353: "busybox" [544a96f5-758e-43eb-b70f-1c53d81f1687] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [544a96f5-758e-43eb-b70f-1c53d81f1687] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004335943s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-236664 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.00s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-236664 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-236664 --alsologtostderr -v=3: (11.999549164s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-236664 -n no-preload-236664
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-236664 -n no-preload-236664: exit status 7 (81.720028ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-236664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (49.70s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-236664 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (49.346799273s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-236664 -n no-preload-236664
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-s44cv" [a491d9f9-cf78-4b5a-bff9-470f486d392a] Running
E0111 09:07:04.133134  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002692167s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-s44cv" [a491d9f9-cf78-4b5a-bff9-470f486d392a] Running
E0111 09:07:08.541910  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003261261s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-236664 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-236664 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (47.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (47.837259104s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-630626 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0d555ec4-fa89-4024-98df-7787a1b7c069] Pending
helpers_test.go:353: "busybox" [0d555ec4-fa89-4024-98df-7787a1b7c069] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0d555ec4-fa89-4024-98df-7787a1b7c069] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.002692974s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-630626 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (50.242133957s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-630626 --alsologtostderr -v=3
E0111 09:08:20.590944  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:08:23.151582  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:08:28.271823  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-630626 --alsologtostderr -v=3: (12.187330168s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-630626 -n embed-certs-630626
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-630626 -n embed-certs-630626: exit status 7 (90.721216ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-630626 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (54.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E0111 09:08:38.512672  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-630626 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (53.960536122s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-630626 -n embed-certs-630626
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-588333 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E0111 09:08:58.993355  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [d123dc54-6086-4b61-9c4a-b6591f715b33] Pending
helpers_test.go:353: "busybox" [d123dc54-6086-4b61-9c4a-b6591f715b33] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d123dc54-6086-4b61-9c4a-b6591f715b33] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003657478s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-588333 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-588333 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-588333 --alsologtostderr -v=3: (12.012626226s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333: exit status 7 (75.714392ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-588333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-588333 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (51.99442587s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-588333 -n default-k8s-diff-port-588333
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-wpbkc" [6da35c6e-01fa-4c84-84af-3b784d77fd9b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003975176s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-wpbkc" [6da35c6e-01fa-4c84-84af-3b784d77fd9b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00446175s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-630626 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-630626 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (33.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-193049 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-193049 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (33.515209288s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-72rrq" [e0c965e4-3c4e-46b7-b4d5-9afc34f79c6c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003496968s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-72rrq" [e0c965e4-3c4e-46b7-b4d5-9afc34f79c6c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003969355s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-588333 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-588333 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-193049 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-193049 --alsologtostderr -v=3: (1.681285555s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.30s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-193049 -n newest-cni-193049
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-193049 -n newest-cni-193049: exit status 7 (102.706574ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-193049 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-193049 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-193049 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (15.71015505s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-193049 -n newest-cni-193049
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.41s)

                                                
                                    
TestPreload/PreloadSrc/gcs (5.38s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-064330 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-064330 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (5.148378875s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-064330" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-064330
--- PASS: TestPreload/PreloadSrc/gcs (5.38s)

                                                
                                    
TestPreload/PreloadSrc/github (5.46s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-068348 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-068348 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (5.146899188s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-068348" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-068348
--- PASS: TestPreload/PreloadSrc/github (5.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-193049 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (1.19s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-560704 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
E0111 09:10:47.507606  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-560704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-560704
--- PASS: TestPreload/PreloadSrc/gcs-cached (1.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (50.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0111 09:10:48.788029  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (50.28596665s)
--- PASS: TestNetworkPlugins/group/auto/Start (50.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (51.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0111 09:10:56.474290  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:11:01.877346  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:11:06.714803  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:11:27.195740  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (51.113959082s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.11s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-293572 "pgrep -a kubelet"
I0111 09:11:39.198831  576907 config.go:182] Loaded profile config "auto-293572": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-293572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-rrvpg" [082fb44b-26fc-4a5c-be80-f20a40189268] Pending
helpers_test.go:353: "netcat-5dd4ccdc4b-rrvpg" [082fb44b-26fc-4a5c-be80-f20a40189268] Running
E0111 09:11:47.193917  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004259593s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-pv456" [9ed6f95c-5253-47d3-be0c-55db34f55425] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004159613s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-293572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-293572 "pgrep -a kubelet"
I0111 09:11:53.744280  576907 config.go:182] Loaded profile config "kindnet-293572": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-293572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-vgp7c" [3369ef09-d03b-45f1-9ef0-4f0207edf0f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-vgp7c" [3369ef09-d03b-45f1-9ef0-4f0207edf0f0] Running
E0111 09:12:04.132894  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003617266s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-293572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (74.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m14.265825975s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (59.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0111 09:13:18.029403  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/old-k8s-version-931581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (59.605257216s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.61s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-bm2rv" [484ed2c2-5452-4379-9613-e9926a8b5f48] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004580697s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-293572 "pgrep -a kubelet"
I0111 09:13:29.202842  576907 config.go:182] Loaded profile config "custom-flannel-293572": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-293572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-czhzf" [dd457ab0-30cd-4755-a96d-e12c79efa716] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0111 09:13:30.077359  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/no-preload-236664/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-czhzf" [dd457ab0-30cd-4755-a96d-e12c79efa716] Running
I0111 09:13:33.619596  576907 config.go:182] Loaded profile config "calico-293572": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003238633s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)
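Note: NetCatPod force-replaces a small netcat deployment and waits for the app=netcat pod to become healthy. The contents of testdata/netcat-deployment.yaml are not shown in this report, so the sketch below only mirrors the two steps; kubectl wait stands in for the test's own readiness polling.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output, like the (dbg) lines above.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	ctx := "custom-flannel-293572"
	if err := run("kubectl", "--context", ctx, "replace", "--force",
		"-f", "testdata/netcat-deployment.yaml"); err != nil {
		fmt.Println("replace failed:", err)
		return
	}
	if err := run("kubectl", "--context", ctx, "wait", "pod",
		"-l", "app=netcat", "--for=condition=Ready", "--timeout=15m"); err != nil {
		fmt.Println("netcat pod never became Ready:", err)
	}
}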

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-293572 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-293572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-9vl49" [7d5614d2-3d46-47cf-bf8e-91cf7fca8a11] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-9vl49" [7d5614d2-3d46-47cf-bf8e-91cf7fca8a11] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003489356s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-293572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
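Note: the DNS, Localhost, and HairPin subtests above exercise three things from inside the netcat pod: cluster DNS (nslookup kubernetes.default), loopback reachability (nc to localhost:8080), and hairpin traffic, i.e. the pod reaching itself through its own Service name (netcat:8080). The sketch below simply replays the same three kubectl execs; the port and service name are taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

// check runs a shell command inside the netcat deployment's pod.
func check(ctx, name, shellCmd string) {
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
	fmt.Printf("%s: err=%v\n%s", name, err, out)
}

func main() {
	ctx := "custom-flannel-293572"
	check(ctx, "dns", "nslookup kubernetes.default")
	check(ctx, "localhost", "nc -w 5 -i 5 -z localhost 8080")
	check(ctx, "hairpin", "nc -w 5 -i 5 -z netcat 8080")
}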

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-293572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (67.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m7.376169665s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (57.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0111 09:14:19.464153  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:14:39.945006  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (57.944376018s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-lnng5" [3440d810-1580-4f06-82aa-98a7e2815d0a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003812322s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-293572 "pgrep -a kubelet"
I0111 09:15:13.582920  576907 config.go:182] Loaded profile config "enable-default-cni-293572": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-293572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-lfjcx" [20a5404f-3ab3-442a-b50e-f6bbae5f831d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-lfjcx" [20a5404f-3ab3-442a-b50e-f6bbae5f831d] Running
E0111 09:15:20.905342  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/default-k8s-diff-port-588333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003793202s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-293572 "pgrep -a kubelet"
I0111 09:15:17.075895  576907 config.go:182] Loaded profile config "flannel-293572": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-293572 replace --force -f testdata/netcat-deployment.yaml
I0111 09:15:17.375275  576907 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-pwqj8" [968b6bbb-f146-4820-bb5e-d6f5d98e85c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-pwqj8" [968b6bbb-f146-4820-bb5e-d6f5d98e85c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004144313s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-293572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-293572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (69.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-293572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m9.127591039s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-293572 "pgrep -a kubelet"
I0111 09:16:58.379853  576907 config.go:182] Loaded profile config "bridge-293572": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-293572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-qmrq8" [1f510520-884c-4232-9d5a-34a76aca3bb5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0111 09:16:59.994580  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/auto-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-qmrq8" [1f510520-884c-4232-9d5a-34a76aca3bb5] Running
E0111 09:17:04.133280  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/addons-328805/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:17:07.920821  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/kindnet-293572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 09:17:08.541944  576907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-575040/.minikube/profiles/functional-952579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003718624s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-293572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-293572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/332)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-783115 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-783115" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-783115
--- SKIP: TestDownloadOnlyKic (0.43s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-781777" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-781777
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-293572 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-293572

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-293572

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-293572

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-293572

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-293572

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-293572

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-293572

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-293572

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-293572

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-293572

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-293572

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-293572" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-293572" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-293572

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-293572"

                                                
                                                
----------------------- debugLogs end: kubenet-293572 [took: 3.307693059s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-293572" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-293572
--- SKIP: TestNetworkPlugins/group/kubenet (3.47s)
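Note: the kubenet group is skipped because kubenet supplies no CNI plugin, and the log states that the crio runtime under test requires CNI. A hypothetical guard with the same shape as the skip message above (not the actual net_test.go:93 code; the docker-only condition is an assumption):

package netsketch

import "testing"

// maybeSkipKubenet mirrors the skip above: only the docker runtime is assumed
// to run kubenet; every CNI-requiring runtime (crio here) skips the whole group.
func maybeSkipKubenet(t *testing.T, containerRuntime string) {
	if containerRuntime != "docker" {
		t.Skipf("Skipping the test as %s container runtimes requires CNI", containerRuntime)
	}
}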

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-293572 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-293572" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-293572

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-293572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-293572"

                                                
                                                
----------------------- debugLogs end: cilium-293572 [took: 3.670118345s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-293572" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-293572
--- SKIP: TestNetworkPlugins/group/cilium (3.82s)

                                                
                                    